LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   error in fstab causing the system to land in single user (https://www.linuxquestions.org/questions/linux-general-1/error-in-fstab-causing-the-system-to-land-in-single-user-4175542476/)

paziulek 05-13-2015 08:47 AM

error in fstab causing the system to land in single user
 
Hi,

There are situations where an improper option or a typo in a mount point declaration in fstab causes the system to drop directly into single user mode - nothing unusual there.
It becomes very frustrating when the filesystem that fails to mount is not at all essential for booting the system into multiuser mode/runlevel.
One of my friends reformatted an NTFS disk to ext4; the filesystem type in fstab was changed accordingly, but the mount options ( for udisks ) were not.
Two weeks later he decided to reboot the system, and it was unable to return to multiuser: just the single-user CTRL-D prompt, >> WITHOUT ANY << error message, and nothing in /var/log was updated besides wtmp - kind of a messed up situation!
The system is running Debian 8 Jessie with systemd, but this happens across the board, not just on Debian. I had a similar situation about 1-2 years ago, when I made a typo in the entry for a "scratch", low-priority filesystem, unneeded by the system and by any of the apps. When the system restarted, that filesystem failed to mount and the box landed in single user mode... and I was miles away from the console. I expressed my feelings back then as well.

This is like having unplugged speakers prevent the system from booting...

I simply refuse to understand this, or to accept it as the proper way of handling >>such<< errors... we are not talking about being unable to mount the root filesystem, but some secondary one - not crucial, and not required to boot the system. Of course it should generate errors, and it should make the admin aware of the problem, but it should not prevent the admin from troubleshooting the issue from a remote location... this design is poor and nasty and (...).

What do you think? Is this good and nobody should be concerned, simply leave it alone?

Thanks,

Mike

maples 05-13-2015 09:22 AM

Have you tried adding "nofail" to the mount options?

paziulek 05-13-2015 09:29 AM

Does anyone see an error message here? There is nothing in /var/log ( the only file modified is wtmp ):

https://leto7f.storage.yandex.net/rd...rce_default=no

paziulek 05-13-2015 09:32 AM

Quote:

Originally Posted by maples (Post 5361906)
Have you tried adding "nofail" to the mount options?

Well, if I expected the device not to exist on the next reboot, I probably would - or I would simply comment out/remove the line... but the default behavior should not prevent the system from starting.

paziulek 05-13-2015 09:38 AM

And again, the photo was taken by my friend; after checking for errors he was unable to find anything that would give any kind of clue... Regardless, let's stick to the scenario of being connected remotely, with no one available at the console...

maples 05-13-2015 10:36 AM

Quote:

Originally Posted by paziulek (Post 5361912)
Well, if I expected the device not to exist on the next reboot, I probably would - or I would simply comment out/remove the line... but the default behavior should not prevent the system from starting.

You say that it's better that a system continue booting normally even if it fails to mount a device, except for /.

However, many people want there to be "bright and loud warning sirens" when something like that happens. For example, if it's a production web server and /var fails to mount, how can it access the websites in /var/www? What if /usr doesn't mount, and you're using a distro that stores everything in /usr? Or how can users access their files if /home didn't get mounted? With /home it can become even more problematic, because as users log in, the system might re-create files like .bashrc and some things in .config/ that then need to be removed before the filesystem gets mounted.

There are both advantages and disadvantages to the approach that Linux currently takes. If you have a non-critical drive on your system, I would suggest that you just go ahead and add "nofail" so that you don't run into issues like this.
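As an illustration of that suggestion ( the device UUID and mount point below are hypothetical, not from this thread ), a fstab entry for a non-critical disk could look like this:

Code:

# hypothetical entry for a non-essential data disk
# nofail: booting continues even if the device is missing or the mount fails
# x-systemd.device-timeout=10s: wait at most 10s for the device instead of the default 90s
UUID=1234abcd-0000-4000-8000-000000000000  /mnt/data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2

With "nofail", systemd still logs the failed mount, but no longer pulls the system into emergency mode because of it.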

paziulek 05-13-2015 11:25 AM

Quote:

Originally Posted by maples (Post 5361947)

However, many people want there to be "bright and loud warning sirens" when something like that happens

A critical system is monitored; any change in the mounted filesystems, or in their usage, triggers an alarm. Even if the system restarts in an uncontrolled way, I NEVER want ANY user on that system until I have checked the consistency of the data and the reason why it decided to reboot. I am the person who controls when and what cannot start - not systemd, nor any other init that tells me: "No Way Jose - the USB mouse is not connected to the system, booting into single user..."

It is not about the system not mounting volumes during an uncontrolled, unexpected restart; it is about not doing things properly when the restart is controlled.
Any such reboot should wake you up in the middle of the night, and your first question will not be "are all volumes mounted" - in my case, at least, it will be "wtf happened?". A >brainless< admin will configure a production system so it starts all services automatically, including database(s), etc. - the reason: unforeseen results due to an unplanned or uncontrolled system restart.
BUT
if something like this happens, I want to be able to connect to the system in the middle of the night, without driving 50 miles, troubleshoot, take steps to prepare the environment for users, and have answers ( and maybe solutions ) available. But if the system lands in single user mode, I can do (...).
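Since the complaint is about catching this before a reboot rather than after, here is a minimal sketch ( my own illustration, not something from this thread ) of a check an admin could script: it parses fstab-style lines and flags non-essential entries that lack "nofail" or "noauto". The set of "essential" mount points is an assumption; adjust it for a real system.

```python
# fstab_lint.py - minimal sketch: flag fstab entries whose missing device
# could drop the system into single user mode at boot.
# Illustration only: not a complete fstab parser (no octal escapes, etc.).

# Assumed set of mount points that genuinely must mount for the system to run.
ESSENTIAL = {"/", "/usr", "/var", "/boot"}

def risky_entries(fstab_text):
    """Return (mountpoint, options) pairs for non-essential filesystems
    that have neither 'nofail' nor 'noauto' in their mount options."""
    risky = []
    for line in fstab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split()
        if len(fields) < 4:
            continue  # malformed line; a real tool would report it
        device, mountpoint, fstype, options = fields[:4]
        if fstype in ("swap", "none"):
            continue  # swap failures are handled differently
        if mountpoint in ESSENTIAL:
            continue  # these *should* block boot when they fail
        if not set(options.split(",")) & {"nofail", "noauto"}:
            risky.append((mountpoint, options))
    return risky

if __name__ == "__main__":
    sample = """\
UUID=aaaa-bbbb  /             ext4  errors=remount-ro  0  1
UUID=cccc-dddd  /mnt/data     ext4  defaults           0  2
UUID=eeee-ffff  /mnt/scratch  ext4  defaults,nofail    0  2
"""
    for mnt, opts in risky_entries(sample):
        print(f"WARNING: {mnt} ({opts}) lacks 'nofail' - a missing device will block boot")
```

Run against the embedded sample it warns only about /mnt/data; for a live system you would read /etc/fstab instead.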

Thanks for your reply.

Mike

John VV 05-13-2015 02:39 PM

This looks like a minor bug.

I have been seeing it about 50/50 on a KVM guest running Debian 8.

Sometimes one unit finishes before another in systemd - this is, after all, the first Debian release with it, so expect some minor bugs.

systemd has boot analysis built in (systemd-analyze): run it, output an SVG, and look at the boot order.
Code:

su -
systemd-analyze plot > /boot.svg

and have a look

