LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (http://www.linuxquestions.org/questions/linux-general-1/)
-   -   Why can't I unmount my second hard drive? (http://www.linuxquestions.org/questions/linux-general-1/why-cant-i-unmount-my-second-hard-drive-676144/)

websissy 10-13-2008 07:17 PM

Why can't I unmount my second hard drive?
 
I'm having a problem with my secondary hard drive. Even though sdb1 is NOT listed in /etc/fstab and I have NOT explicitly mounted it, Linux insists sdb1 is mounted whenever I try to run even a read-only e2fsck or fsck on that partition.

In short, when I type:

Code:

e2fsck -nf /dev/sdb1
I get:

Code:

Warning! /dev/sdb1 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.

But when I try to umount the drive, Linux insists it isn't there!

Code:

umount /dev/sdb1
umount replies,

Code:

umount: /dev/sdb1: not mounted
There is a directory named /sdb1 in the main file system directory; but it shows as completely empty.

This has been going on since my server host's techs reinstalled the OS on the primary boot drive in mid-August. There seems to be NOTHING I can do from here to unmount that second drive.

Can anyone here tell me why Linux reports that /dev/sdb1 can't be unmounted because it isn't mounted, while e2fsck insists it IS mounted? If you can, can you also tell me how to unmount this stubborn device?

Thanks!

AuroraCA 10-13-2008 07:23 PM

What directory are you in when you enter the umount command?

Try changing to the / directory and running your commands again.

rabbit2345 10-13-2008 09:46 PM

Try running the plain mount command with no arguments and post the output here. That should list all the mounted filesystems on your machine.

websissy 10-13-2008 11:39 PM

Here's the output from every relevant command I (or anyone else) have been able to think of. Do any of them help?

Code:

---------
cat /etc/fstab
---------
myserver:~# cat /etc/fstab

# /etc/fstab: static file system information.
#
# <file system>  <mount point>   <type>       <options>                   <dump> <pass>
proc             /proc           proc         defaults                    0      0
/dev/sda1        /               ext3         defaults,errors=remount-ro  0      1
/dev/sda5        none            swap         sw                          0      0
/dev/hda         /media/cdrom0   udf,iso9660  user,noauto                 0      0

myserver:~#

Code:

---------
myserver:~# mount
---------

/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
myserver:~# _

Code:

---------
umount
---------

myserver:~# umount sdb1
umount: sdb1: not mounted

myserver:~# umount /sdb1
umount: /sdb1: not mounted

myserver:~# umount /dev/sdb1
umount: /dev/sdb1: not mounted

Code:

---------
cat /etc/mtab
---------

myserver:~# cat /etc/mtab
/dev/sda1 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
procbususb /proc/bus/usb usbfs rw 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0

Code:

---------
cat /proc/mounts
---------

rootfs / rootfs rw 0 0
none /sys sysfs rw 0 0
none /proc proc rw 0 0
udev /dev tmpfs rw 0 0
/dev/sdb1 / ext3 rw,data=ordered 0 0
/dev/sdb1 /dev/.static/dev ext3 rw,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid 0 0
usbfs /proc/bus/usb usbfs rw,nosuid,nodev,noexec 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec 0 0

Code:

---------
cat /proc/scsi/scsi
---------

myserver:~# cat /proc/scsi/scsi

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA  Model: ST3500630AS      Rev: 3.AA
  Type:  Direct-Access                  ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA  Model: ST3500630AS      Rev: 3.AA
  Type:  Direct-Access                  ANSI SCSI revision: 05

Code:

myserver:~# ls -l /dev/sda1

brw-rw---- 1 root disk 8, 1 Oct 13 16:10 /dev/sda1

myserver:~# ls -l /dev/sdb1

brw-rw---- 1 root disk 8, 17 Oct 13 16:10 /dev/sdb1

myserver:~#

Code:

---------
fdisk -l
---------

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot     Start       End      Blocks   Id  System
/dev/sda1   *          1     59327   476544096   83  Linux
/dev/sda2          59328     60801    11839905    5  Extended
/dev/sda5          59328     60801    11839873+  82  Linux swap / Solaris

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot     Start       End      Blocks   Id  System
/dev/sdb1   *          1     59327   476544096   83  Linux
/dev/sdb2          59328     60801    11839905    5  Extended
/dev/sdb5          59328     60801    11839873+  82  Linux swap / Solaris

myserver:~#


websissy 10-13-2008 11:51 PM

Sorry... somehow I managed to double-post my response.

websissy 10-13-2008 11:58 PM

Quote:

Originally Posted by AuroraCA (Post 3309170)
What is the path you are on when entering the unmount command?

Try changing to / directory and try your commands again.

I did try this from the root of the file system as the root user. No change.
:(

htnakirs 10-14-2008 02:10 AM

What happens when you physically disconnect the drive?

websissy 10-14-2008 09:41 AM

Quote:

Originally Posted by htnakirs (Post 3309427)
What happens when you physically disconnect the drive?

I believe my next post will answer your question... But I can't really DO that test because the server and I are 1,500 miles apart.

I've figured out the cause; but not the solution yet.

websissy 10-14-2008 09:43 AM

Somehow it's a 'grub' thing...
 
After sleeping on this a few hours, I figured out that the reason /dev/sdb1 can't be unmounted is that grub somehow has it specified as the primary boot device. In short, the system is actually taking its boot files from the second drive, and its data and everything else from the "primary" (first) drive.

I proved this by (just for the helluvit) deliberately trying to unmount sda1 (umount /dev/sda1) while the system was running in single user mode. When it reported the unmount was successful, I KNEW there was something screwy going on. After that, I did a test run of e2fsck in "do-no-harm" mode:

e2fsck -nf /dev/sda1

and discovered Linux did not complain /dev/sda1 was mounted -- even though it was originally the primary boot device in this system. After that test ran and reported no errors, I tried:

e2fsck -p /dev/sda1

Again I got no complaints and the process ran to completion without errors or problems.

On the other hand, when I try to umount /dev/sdb1, umount STILL insists the device isn't mounted but when I try to:

e2fsck -nf /dev/sdb1

it says:

Code:

Warning!  /dev/sdb1 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.

So, I did some further digging and eventually realized there's some sort of trick being played in the grub setup: it boots the system read-only from the second drive, sdb1 (thus protecting the OS and all of its components from being deleted), but then remaps everything except the system areas (/usr, /var, /boot and probably /etc) back onto the drive image from partition sda1.

This 'boot trick' effectively protects the system areas of the sda1 drive from being wiped out accidentally -- which happened on this box back in August (while we were still in sandbox mode), when an overlooked symlink and a mount of sdb1 under a temporary work directory that I removed with rm -rf wiped out BOTH drives at once. :(

Bottom line: I now know the issue is being caused by grub. But even after hours of carefully studying the grub docs and trying to force the boot to occur from sda1 again, I've been unable to figure out how to do that.

Are there any Grub experts around ANYWHERE who can offer some advice? As far as I'm concerned, that program is completely hopeless!

Thanks.

tredegar 10-14-2008 12:31 PM

The LQ member saikee is the forum's grub guru.

Search for posts by him, read relevant ones, and see the links in his sig.

If you still cannot fix the problem, perhaps if you send him a polite email with a link to this thread, he'll be able to help.

I am reluctant to offer advice, although I am happy with grub, as I have sometimes messed things up, and needed to boot from a live CD to sort things out. But my server is upstairs, whilst your server is 1500 miles away.

jiml8 10-14-2008 01:13 PM

I have not seen saikee around for awhile. Wonder where he is.

In any event, grub is configured by the file menu.lst, which should be in /boot/grub. Post that file here.

Also, what is in the file device.map (if it exists), which is also in /boot/grub ?

edit:

Also, both fstab and mtab list sda1 as / but /proc/mounts lists sdb1 as / and also lists sdb1 as /dev/.static/dev. What is in /dev/.static?
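For reference, here's a hypothetical way to gather all of the above in one go (the paths are the GRUB legacy defaults on Debian; each check is guarded so a missing file simply reports itself instead of erroring out):

Code:

```shell
# Print the GRUB legacy config files, if present (paths are the
# Debian defaults; adjust if your /boot layout differs):
for f in /boot/grub/menu.lst /boot/grub/device.map; do
    echo "== $f =="
    [ -f "$f" ] && cat "$f" || echo "(not present)"
done

# /dev/.static is where Debian's udev keeps a snapshot of the
# original static /dev tree:
ls -l /dev/.static 2>/dev/null || echo "/dev/.static: (not present)"
```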

websissy 10-14-2008 01:30 PM

[SOLVED!] Grub was INDEED the culprit...
 
Okay, I figured it out. The way grub was set up on my server, its primary boot device was /dev/sdb1 (a.k.a. hd1) in the grub startup menu defaults.

So even though everything ELSE for the system was drawn from /dev/sda1 (e.g. all of the web content), the boot utilities and Linux itself were coming from /dev/sdb1. In short, the system portion of sda1 wasn't being used. This seems to have served the DUAL purpose of protecting the primary drive's bootstrap from accidental deletion and ensuring there was always a good, unmodified bootstrap on the system no matter what happened. I guess it might also be a way to protect the main system bootstrap against rootkits, for example.

Here's an example of what those 'AutoMagic' parameters in /boot/grub/menu.lst look like:

Code:

---------
BEFORE
---------

## ## End Default Options ##

title          Debian GNU/Linux, kernel 2.6.18-6-amd64
root            (hd1,0)      <=========== for sda1, make this hd0
kernel          /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sdb1 ro    <=========== change sdb1 to sda1
initrd          /boot/initrd.img-2.6.18-6-amd64
savedefault

title          Debian GNU/Linux, kernel 2.6.18-6-amd64 (single-user mode)
root            (hd1,0)      <=========== for sda1, make this hd0
kernel          /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sdb1 ro single    <=========== change sdb1 to sda1
initrd          /boot/initrd.img-2.6.18-6-amd64
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST


Code:

---------
AFTER
---------

## ## End Default Options ##

title          Debian GNU/Linux, kernel 2.6.18-6-amd64
root            (hd0,0)      <=========== note change!
kernel          /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sda1 ro    <=========== note change!
initrd          /boot/initrd.img-2.6.18-6-amd64
savedefault

title          Debian GNU/Linux, kernel 2.6.18-6-amd64 (single-user mode)
root            (hd0,0)      <=========== note change!
kernel          /boot/vmlinuz-2.6.18-6-amd64 root=/dev/sda1 ro single    <=========== note change!
initrd          /boot/initrd.img-2.6.18-6-amd64
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

This "AutoMagic" change of boot drives was all being handled behind the scenes at boot time by grub. It grabs its startup parameters from /boot/grub/menu.lst on the first partition of the first drive where it finds a bootstrap, then uses them to decide where to go from there. In our case, all four of the "AutoMagic" parameters for both boot methods (multi-user and single-user [a.k.a. 'maintenance mode']) were set to use /dev/sdb1, NOT /dev/sda1.

Grub's boot parameters can be changed by going to the system console (or reaching it via KVM) and waiting for the grub boot menu to appear (assuming, of course, that grub was configured during installation to display it). Use the arrow keys to select the boot entry you want, then press "e" to edit that entry's start-up parameters. If the entry includes the "savedefault" option, the changed parameters should be saved for the next reboot; in our case, though, I had to tinker with it a bit to make sure the new settings got saved.

That explains why e2fsck and fsck both detected /dev/sdb1 as "in use": they consult the kernel's own mount list in /proc/mounts, so they aren't fooled by grub's boot trick. The mount and umount utilities, on the other hand, rely on the userspace record in /etc/mtab, which never recorded that mount.

I have a hunch this technique may be in some ways unique to the Debian Linux distro. However, the rules and methods described here for grub should apply to most other distributions as well.
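As a sketch of that mtab / proc-mounts mismatch, here are simulated one-line versions of each file, taken from the outputs posted earlier in this thread (this is only an illustration, not the real files):

Code:

```shell
# umount consults /etc/mtab (userspace bookkeeping), while e2fsck
# consults /proc/mounts (the kernel's authoritative list). Simulated
# one-line versions of each, from the outputs posted above:
proc_mounts="/dev/sdb1 / ext3 rw,data=ordered 0 0"
etc_mtab="/dev/sda1 / ext3 rw,errors=remount-ro 0 0"

# e2fsck's view: sdb1 appears in the kernel list, so it warns it's mounted
echo "$proc_mounts" | grep -q sdb1 && echo "e2fsck: /dev/sdb1 is mounted"

# umount's view: sdb1 is absent from mtab, so it reports "not mounted"
echo "$etc_mtab" | grep -q sdb1 || echo "umount: /dev/sdb1: not mounted"
```

That one-line discrepancy produces exactly the pair of contradictory messages this whole thread started with.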

So, there you have it. I hope this technique and what I've learned here eventually helps someone else.

Thanks a bunch for your efforts to help!

jiml8 10-14-2008 01:38 PM

Quote:

This "AutoMagic" change of boot drives was all being handled behind the scenes at boot time by grub. It grabs its startup parameters from /boot/grub/menu.lst on the first partition of the first drive where it finds a bootstrap, then uses them to decide where to go from there. In our case, all four of the "AutoMagic" parameters for both boot methods (multi-user and single-user [a.k.a. 'maintenance mode']) were set to use /dev/sdb1, NOT /dev/sda1.
Not quite right.

The boot drive is set by the BIOS. On startup, BIOS will branch to the specified boot drive and expects to begin executing a bootloader that is in the master boot record. BIOS will fail if there is no bootloader.

If the bootloader is grub, it expects to find more of itself at a specific location on the drive it is booting from - and it knows this location in absolute terms, regardless of filesystem. When you tell grub to install itself, you specify which drive it puts its MBR code onto, and you also specify which drive contains the root filesystem - including the second part of grub.

So, unless you have reinstalled grub on sda, you presently are booting initially from sdb, then transferring control to sda. This works fine, but you'll wind up getting bitten if you don't understand it and at a later point delete /boot/grub from sdb or otherwise rearrange the system.

You would be best off reinstalling grub on sda.
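For what it's worth, a reinstall along those lines might look like this with GRUB legacy (the device names here are hypothetical -- verify them against /boot/grub/device.map and fdisk -l first, because writing the MBR of the wrong drive can leave the box unbootable):

Code:

```
# From the interactive grub shell, run as root on the live system
# (on Debian, "grub-install /dev/sda" wraps the same steps):
grub> root (hd0,0)     <=== the partition holding /boot/grub on sda
grub> setup (hd0)      <=== writes grub's stage1 into sda's MBR
grub> quit
```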

websissy 11-13-2008 12:01 PM

Quote:

Originally Posted by jiml8 (Post 3309971)
Not quite right.

...the rest of jiml8's message appears above; but he concludes...

You would be best off reinstalling grub on sda.

Thanks very much for the feedback, jiml8! Sorry it took me a month to catch up with your post, which coincidentally arrived just 8 minutes after mine. I'm not a regular here unless I'm chasing a problem, and as you saw in my last post, I thought I'd figured out this problem and solved it a month ago. :o

Nonetheless, because I'm a cautious soul, I hadn't done a drive-to-drive dd backup copying sda to sdb in the past month, because I kept expecting some related issue to turn up somewhere. I had planned to run my first full backup since 10-14 last night, when the proverbial defecation hit the perennial ventilation in a completely unrelated area.

While researching the underlying cause of that new issue here, I spotted your message. The trouble is, I'm not sure exactly what you're telling me. Here's the way I interpreted your reply and the questions it raises in my mind.


Here's what I THINK you said...

1. I had basically concluded our server had been booting from sdb since the webhost rebuilt the drives in mid-August, and I needed it to boot from sda again. I thought I fixed that by doing what I did last month, but I gather you're telling me I did NOT fix it. Did I get that right?

2. Where I lost you was in the remarks about the boot drive being set in the BIOS. I think that implies a hardware setup change would need to occur to change which drive we're booting from. Is that right?

If so, I'm not sure I can convince my server host's techs to make such a change because under their rules that would be beyond the scope of the limited support they're willing to provide.

3. That means I also need to ask: Is this BIOS setup change one I can make from here with KVM access or does it require hands on at the server's console which is 1,500 miles away from me? If hands-on is required I can't do that.

4. I think you're also saying you assume there's no grub boot capability installed on sda, and that this must be corrected by installing grub on sda before I change the BIOS settings to boot from sda. Is that understanding also correct?

5. If I got #3 and #4 right, do I need hands-on server console access to install grub on sda? If so, that's impossible.


At that point, my last 3 questions are:

6. Is there some remote way to CHECK to see if all required parts of grub are installed on sda?

and

7. Is it possible for me to COPY all required parts of grub from drive sdb to sda or does grub need to be installed from some sort of setup disk?

8. Is there a procedure or HowTo that you know of somewhere that describes how to install (or reinstall) grub? The docs for this program aren't the worst I've ever seen, but I'd definitely put them in 2nd or 3rd place! This system did at one time boot from sda before an errant rm -f command destroyed both drives in a mushroom cloud. :(

Thanks very much for your feedback, jiml8. I sincerely appreciate it! I hope I've understood you correctly and that you'll take a minute to answer these questions too.

Thanks VERY much!


All times are GMT -5.