error in fstab causing the system to land in single user
Hi,
There are situations where an improper option, or typo in the mount point declaration in fstab was causing the system to go directly into single user mode, nothing unusual. It becomes very frustrating, if the filesystem that fails to mount is completely not essential to boot a system into multiuser mode/runlevel. One of my friends formatted a ntfs disk to ext4, the filesystem entry in fstab has been changed accordingly, but the mount options ( for udisks ) have not been modified. Two weeks later he decided to reboot the system, and it was unable to return back into multiuser, single user CTRL-d prompt, >> WITHOUT ANY << error message, none of the /var/log were populated beside wtmp - kind of a messed up situation! The system is running Debian v8 Jessie, with systemd, but this is the case across the board, not just Debian. I had a similar situation about 1-2 years ago, when I made a typo on some "scratch", low priority filesystem, unneeded by the system, and unneeded by any of the apps. But when the system restarted, that filesystem failed to mount and landed in single user mode... And I was miles away from the console ( I ask wtf? ) I expressed my feelings back then as well. This is like having speakers not connected to the system causing it not to boot... I do not know, but I simply refuse to understand this, and accept this as the proper way of handling >>such<< errors... we are not talking about being unable to mount ROOTFS, but some secondary, not crucial and not required to be able to boot the system. Of course, it should generate errors, and it should make the admin aware about such problem, but not prevent the admin from being unable to troubleshoot the issue from a remote location... this design is poor and nasty and (...). What do you think? Is this good and nobody should be concerned, simply leave it alone? Thanks, Mike |
Have you tried adding "nofail" to the mount options?
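For example (the UUID and mount point below are made up - adjust for your own drive): marking a non-essential volume with "nofail" in /etc/fstab tells the system not to block the boot if that volume can't be mounted:
Code:
# <file system>  <mount point>  <type>  <options>        <dump> <pass>
UUID=3333-4444   /data          ext4    defaults,nofail  0      2
|
With nofail, the mount failure is logged and boot continues into multiuser. On systemd you can also add x-systemd.device-timeout=10s to the options so the boot doesn't wait the default 90 seconds for a device that never shows up.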
|
Does anyone see an error message here? There is nothing in /var/log ( the only file modified is wtmp ):
https://leto7f.storage.yandex.net/rd...rce_default=no |
And again: the photo was taken by my friend after checking for any errors - he was unable to find anything that would give some kind of idea... Regardless, let's stick with the scenario of being connected remotely, with no one available to work at the console...
|
Quote:
However, many people want there to be "bright and loud warning sirens" when something like that happens. For example, if it's a production web server and /var fails to mount, how can it serve the websites in /var/www? What if /usr doesn't mount, and you're using a distro that stores everything in /usr? Or how can users access their files if /home didn't get mounted? With /home it can become even more problematic, because as users log in, the system may re-create files like .bashrc and some things in .config/ that then need to be removed before the filesystem gets mounted. There are both advantages and disadvantages to the approach Linux currently takes. If you have a non-critical drive on your system, I would suggest you just go ahead and add "nofail" so that you don't run into issues like this. |
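A quick way to audit an fstab for entries that could block the boot is to look for non-root lines whose options field lacks "nofail". A sketch (the sample file, UUIDs, and mount points below are made up for illustration - point awk at your real /etc/fstab instead):

```shell
# Create a sample fstab for illustration (made-up UUIDs and mount points)
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-2222  /        ext4  errors=remount-ro  0 1
UUID=3333-4444  /data    ext4  defaults           0 2
UUID=5555-6666  /scratch ext4  defaults,nofail    0 2
EOF

# Print non-comment, non-root entries whose options field lacks "nofail" -
# these are the ones that can drop the system into single user mode on failure
awk '$0 !~ /^#/ && $2 != "/" && $4 !~ /nofail/ {print $2}' /tmp/fstab.sample
# prints: /data
```

Here only /data is flagged: the root filesystem is skipped (it genuinely is boot-critical), and /scratch already carries nofail.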
Quote:
It is not about system not mounting volumes during a not controlled and unexpected restart, it is about not doing things properly when the restart is controlled. Any such reboot should wake you up in the middle of the night, and your first question will not be "if all volumes are mounted" - at least my case it will be "wtf happened?". A >brainless< admin will configure a production system so it starts all services automatically, including database(s), etc. - the reason: unforeseen results due to an unplanned or uncontrolled system restart. BUT If something like this happens, I want to be able to connect to the system, in the middle of the night, without driving 50 miles, troubleshoot, and take steps prepare the environment for users, and have answers available ( and maybe solutions as well ). But if the system lands in a single user mode, I can do (...). Thanks for your reply. Mike |
This looks like a minor bug.
I have been seeing this about 50/50 on a KVM guest running Debian 8. Sometimes one unit finishes before another in systemd - this is, after all, the first Debian release with it, so expect some minor bugs. systemd has a boot analyzer built in (systemd-analyze): run it, have it output an SVG, and look at the boot order Code:
su -
systemd-analyze plot > boot.svg |