Linux - General: This Linux forum is for general Linux questions and discussion. If it is Linux-related and doesn't seem to fit in any other forum, then this is the place.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I'm having a problem with my secondary hard drive. Even though sdb1 is NOT listed in /etc/fstab and I have NOT explicitly mounted it, Linux insists sdb1 is mounted if I try to run even a read-only e2fsck or fsck on that partition.
In short, when I type:
Code:
e2fsck -nf /dev/sdb1
I get:
Code:
Warning! /dev/sdb1 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
But when I try to umount the drive, Linux insists it isn't there!
Code:
umount /dev/sdb1
umount replies,
Code:
umount: /dev/sdb1: not mounted
There is a directory named /sdb1 in the root of the file system, but it shows as completely empty.
This has been going on since my server host's techs reinstalled the OS on the primary boot drive in mid-August. There seems to be NOTHING I can do from here to unmount that second drive.
Can anyone here tell me why Linux reports that /dev/sdb1 can't be unmounted because it isn't mounted, while e2fsck insists it IS mounted? If you can, can you also tell me how to unmount this stubborn device?
---------
myserver:~# mount
---------
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
myserver:~# _
Code:
---------
umount
---------
myserver:~# umount sdb1
umount: sdb1: not mounted
myserver:~# umount /sdb1
umount: /sdb1: not mounted
myserver:~# umount /dev/sdb1
umount: /dev/sdb1: not mounted
myserver:~# ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Oct 13 16:10 /dev/sda1
myserver:~# ls -l /dev/sdb1
brw-rw---- 1 root disk 8, 17 Oct 13 16:10 /dev/sdb1
myserver:~#
Code:
---------
fdisk -l
---------
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 59327 476544096 83 Linux
/dev/sda2 59328 60801 11839905 5 Extended
/dev/sda5 59328 60801 11839873+ 82 Linux swap / Solaris
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 59327 476544096 83 Linux
/dev/sdb2 59328 60801 11839905 5 Extended
/dev/sdb5 59328 60801 11839873+ 82 Linux swap / Solaris
myserver:~#
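A side note that may help anyone else debugging a disagreement like this: on a system of this vintage, mount and umount consult /etc/mtab, while e2fsck asks the kernel via /proc/mounts, so the two can tell different stories. Comparing the two files usually exposes the mismatch. Below is only a sketch using a hypothetical sample of what the kernel's table might contain; on the real server you would simply run grep sdb1 /proc/mounts /etc/mtab.

```shell
# Hypothetical /proc/mounts contents for a box whose kernel actually
# booted with /dev/sdb1 as the root filesystem (sample data, not real):
cat > /tmp/proc_mounts.sample <<'EOF'
rootfs / rootfs rw 0 0
/dev/sdb1 / ext3 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
EOF

# e2fsck trusts the kernel's view: if sdb1 shows up here, it warns that
# the device is mounted, even though /etc/mtab (all that mount and
# umount look at) never listed it.
if grep -q '^/dev/sdb1 ' /tmp/proc_mounts.sample; then
    echo "kernel says /dev/sdb1 is mounted"
fi
```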
After sleeping on this for a few hours, I managed to figure out that the reason /dev/sdb1 can't be unmounted is that grub somehow has it specified as the primary boot device. In short, it appears the system is actually taking its boot stuff from the second drive and its data and everything else from the "primary" (first) drive.
I proved this by (just for the helluvit) deliberately trying to unmount sda1 (umount /dev/sda1) while the system was running in single user mode. When it reported the unmount was successful, I KNEW there was something screwy going on. After that, I did a test run of e2fsck in "do-no-harm" mode:
e2fsck -nf /dev/sda1
and discovered Linux did not complain /dev/sda1 was mounted -- even though it was originally the primary boot device in this system. After that test ran and reported no errors, I tried:
e2fsck -p /dev/sda1
Again I got no complaints and the process ran to completion without errors or problems.
On the other hand, when I try to umount /dev/sdb1, umount STILL insists the device isn't mounted but when I try to:
e2fsck -nf /dev/sdb1
it says:
Code:
Warning! /dev/sdb1 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
So, I did some further digging and eventually realized there's some sort of trick being played in the grub setup that boots the system in read-only mode from the second drive, sdb1 (thus protecting the OS and all of its components from being deleted), but then remaps everything but the system areas (/usr, /var, /boot and probably /etc) back into the drive image from partition sda1.
This 'boot trick' effectively protects the system areas of the sda1 drive from being wiped out accidentally -- which happened on this box back in August (while we were still in sandbox mode) when an overlooked symlink and a mount of sdb1 in a temporary work directory I removed with rm -rf accidentally wiped out BOTH drives at once.
Bottom line: I now know the issue is being caused by grub. But even after hours of carefully studying the grub docs and trying to force the boot to occur from sda1 again, I've been unable to figure out how to do that.
Are there any Grub experts around ANYWHERE who can offer some advice? As far as I'm concerned, that program is completely hopeless!
Search for posts by him, read relevant ones, and see the links in his sig.
If you still cannot fix the problem, perhaps if you send him a polite email with a link to this thread, he'll be able to help.
I am reluctant to offer advice, although I am happy with grub, as I have sometimes messed things up, and needed to boot from a live CD to sort things out. But my server is upstairs, whilst your server is 1500 miles away.
Okay, I figured it out. The way grub was set up on my server, its primary boot device was set as /dev/sdb1 (a.k.a. hd1) in the grub startup menu defaults.
So even though everything ELSE for the system was drawn from /dev/sda1 (e.g. all of the web content) the boot utilities and linux itself were coming from /dev/sdb1. In short, the system portion of sda1 wasn't being used. This seems to have served the DUAL purpose of protecting the primary drive's bootstrap from accidental deletion and ensuring that there was always a good un-modified bootstrap on the system no matter what happened. I guess it might also be a way to protect the main system bootstrap against rootkits, for example.
Here's an example of what those 'AutoMagic' parameters in /boot/grub/menu.lst look like:
Code:
---------
BEFORE
---------
## ## End Default Options ##
title Debian GNU/Linux, kernel 2.6.18-6-amd64
root (hd1,0) <=========== for sda1, make this hd0
kernel /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sdb1 ro <=========== change sdb1 to sda1
initrd /boot/initrd.img-2.6.18-6-amd64
savedefault
title Debian GNU/Linux, kernel 2.6.18-6-amd64 (single-user mode)
root (hd1,0) <=========== for sda1, make this hd0
kernel /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sdb1 ro single <=========== change sdb1 to sda1
initrd /boot/initrd.img-2.6.18-6-amd64
savedefault
### END DEBIAN AUTOMAGIC KERNELS LIST
Code:
---------
AFTER
---------
## ## End Default Options ##
title Debian GNU/Linux, kernel 2.6.18-6-amd64
root (hd0,0) <=========== note change!
kernel /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sda1 ro <=========== note change!
initrd /boot/initrd.img-2.6.18-6-amd64
savedefault
title Debian GNU/Linux, kernel 2.6.18-6-amd64 (single-user mode)
root (hd0,0) <=========== note change!
kernel /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sda1 ro single <=========== note change!
initrd /boot/initrd.img-2.6.18-6-amd64
savedefault
### END DEBIAN AUTOMAGIC KERNELS LIST
This "AutoMagic" change of boot drives was all being handled behind the scenes at boot time by Grub. It apparently grabs its startup parameters from /boot/grub/menu.lst on the first partition of the first drive where it finds a bootstrap and then uses them to control where to go from there. In our case all four of the "AutoMagic" parameters there for both boot methods (multi-user and single-user [aka 'maintenance mode']) were set to use /dev/sdb1 and NOT /dev/sda1.
Grub's boot parameters can be changed by the admin by going to the system console (or accessing the system via KVM) and waiting for the grub boot options menu to appear (assuming, of course, that grub was configured during installation to display this menu). The arrow keys can be used there to select the boot method you want, and THEN the admin can press "e" to edit the start-up parameters for that boot method. Once those parameters have been changed, if the entry includes the "savedefault" option they should be saved for the next reboot. However, in our case I had to tinker with it a bit to make sure the new startup changes got saved.
That explains why e2fsck and fsck both detected /dev/sdb1 as "in use": because they were designed to protect the filesystem, those utilities were written to be smart enough not to be fooled by boot tricks played by grub. However, the Linux mount and umount utilities ARE fooled by grub's trickery.
I have a hunch this technique may be in some ways unique to the Debian Linux distro. However, the rules and methods described here for grub should probably apply to most other distributions as well.
So, there you have it. I hope this technique and what I've learned here eventually helps someone else.
Quote:
This "AutoMagic" change of boot drives was all being handled behind the scenes at boot time by Grub. It apparently grabs its startup parameters from /boot/grub/menu.lst on the first partition of the first drive where it finds a bootstrap and then uses them to control where to go from there. In our case all four of the "AutoMagic" parameters there for both boot methods (multi-user and single-user [aka 'maintenance mode']) were set to use /dev/sdb1 and NOT /dev/sda1.
Not quite right.
The boot drive is set by the BIOS. On startup, BIOS will branch to the specified boot drive and expects to begin executing a bootloader that is in the master boot record. BIOS will fail if there is no bootloader.
If the bootloader is grub, it expects to find more of itself at a specific location on the drive it is booting from - and it knows this location in absolute terms, regardless of filesystem. When you tell grub to install itself, you specify which drive it puts its MBR code onto, and you also specify which drive contains the root filesystem - including the second part of grub.
So, unless you have reinstalled grub on sda, you presently are booting initially from sdb, then transferring control to sda. This works fine, but you'll wind up getting bitten if you don't understand it and at a later point delete /boot/grub from sdb or otherwise rearrange the system.
...the rest of jiml8's message appears above; but he concludes...
You would be best off reinstalling grub on sda.
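For anyone following along, reinstalling GRUB legacy (the version whose menu.lst appears earlier in this thread) onto sda can normally be done from an ordinary root shell, with no console access required. This is only a sketch: the device names are taken from this thread, and the exact steps may differ on your system.

```
# One-shot, from a root shell:
grub-install /dev/sda

# Or step by step from the interactive grub shell:
grub
grub> root (hd0,0)   # the partition holding /boot/grub (here /dev/sda1)
grub> setup (hd0)    # write stage1 into the MBR of the first BIOS disk
grub> quit
```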
Thanks very much for the feedback, jiml8! Sorry it took me a month to catch up with your post -- which coincidentally occurred just 8 minutes after mine; but I'm not a regular here unless I'm chasing a problem and as you saw in my last post, I thought I'd figured out this problem and solved it a month ago.
Nonetheless, because I'm a cautious soul, I hadn't done a drive-to-drive dd backup copying drive sda to drive sdb in the past month because I kept expecting some related issue to turn up somewhere. I had planned to run my first full backup since 10-14 last night when the proverbial defecation hit the perennial ventilation in a completely unrelated area.
While researching the underlying cause of that new issue here, I spotted your message. The trouble is, I'm not sure exactly what you're telling me. Here's the way I interpreted your reply and the questions it raises in my mind.
Here's what I THINK you said...
1. I had basically concluded our server had been booting from sdb since the webhost rebuilt the drives in mid-August. I needed it to boot from sda again. I thought I fixed that by doing what I did last month, but I gather you're telling me I did NOT fix it. Did I get that right?
2. Where I lost you was in the remarks about the boot drive being set in the BIOS. I think that implies a hardware setup change would need to occur to change which drive we're booting from. Is that right?
If so, I'm not sure I can convince my server host's techs to make such a change because under their rules that would be beyond the scope of the limited support they're willing to provide.
3. That means I also need to ask: Is this BIOS setup change one I can make from here with KVM access or does it require hands on at the server's console which is 1,500 miles away from me? If hands-on is required I can't do that.
4. I think you're also saying you assume there's no grub boot-up capability installed on sda and that must be corrected by installing grub on sda before I change the bios settings to boot from sda. Is that understanding also correct?
5. If I got #3 and #4 right, do I need hands-on server console access to install grub on sda? If so, that's impossible.
At that point, my last 3 questions are:
6. Is there some remote way to CHECK to see if all required parts of grub are installed on sda?
and
7. Is it possible for me to COPY all required parts of grub from drive sdb to sda or does grub need to be installed from some sort of setup disk?
8. Is there a procedure or HowTo that you know of somewhere that describes how to install (or reinstall) grub? The docs for this program aren't the worst I've ever seen, but I'd definitely put them in 2nd or 3rd place! This system did at one time boot from sda before an errant rm -rf command destroyed both drives in a mushroom cloud.
Thanks very much for your feedback, jiml8. I sincerely appreciate it! I hope I've understood you correctly and that you'll take a minute to answer these questions too.
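(A possible way to check questions 6 and 7 remotely, sketched here for GRUB legacy and the device layout shown above; the mount point name is arbitrary and this is untested on your particular setup.)

```
# Question 6: see whether sda1 already carries GRUB's files.
mkdir -p /mnt/sda1
mount -o ro /dev/sda1 /mnt/sda1
ls /mnt/sda1/boot/grub    # look for stage1, stage2 and menu.lst
umount /mnt/sda1

# Question 7: copying /boot/grub from sdb to sda is not enough by
# itself; grub-install (or the grub shell's setup command) must still
# write stage1 into sda's MBR so the BIOS can hand control to it.
```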