Old 10-09-2017, 11:14 AM   #1
gda
Member
 
Registered: Oct 2015
Posts: 103

Rep: Reputation: 25
Problem unmounting filesystems on NFS/SMBFS HA cluster!


Dear all,

Quite some time ago I configured two Slackware servers as an NFS/SMBFS high-availability cluster. The clustering is done via Heartbeat.

In short, the primary node exports several filesystems via NFS and Samba. The secondary node is meant to take over all the resources in case the primary node dies (or becomes unavailable).

This configuration works just fine except for the following annoying problem: when I try to stop the heartbeat service on the primary node (for maintenance, for example), for some reason certain filesystems do not get unmounted successfully. Heartbeat keeps trying to unmount them for a while, and then the server crashes completely and reboots on its own (which is of course really very bad!!).
What makes the situation even stranger is that the problem does not always occur! I have noticed that if I stop the heartbeat service on the primary node right after heartbeat has been started, everything works perfectly (the secondary node takes over all the resources without any problem!!). The problem seems to start occurring only if the primary node has been up for, say, several days...

Looking at the log files I found this:

Code:
Sep 15 07:11:55 galileo ResourceManager[5538]: [5561]: info: Running /etc/init.d/rc.samba-hb  stop
Sep 15 07:11:57 galileo ResourceManager[5538]: [5580]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.69.28/24/eth1 stop
Sep 15 07:11:57 galileo IPaddr[5607]: [5619]: INFO: ifconfig eth1:1 down
Sep 15 07:11:57 galileo IPaddr[5581]: [5623]: INFO:  Success
Sep 15 07:11:57 galileo ResourceManager[5538]: [5638]: info: Running /etc/ha.d/resource.d/Delay 3 0 stop
Sep 15 07:11:57 galileo Delay[5639]: [5662]: INFO:  Success
Sep 15 07:11:57 galileo ResourceManager[5538]: [5672]: info: Running /etc/init.d/rc.nfsd-hb  stop
Sep 15 07:11:58 galileo ResourceManager[5538]: [5694]: info: Running /etc/ha.d/resource.d/Delay 5 0 stop
Sep 15 07:11:58 galileo Delay[5695]: [5718]: INFO:  Success
Sep 15 07:11:58 galileo ResourceManager[5538]: [5728]: info: Running /etc/init.d/rc.rpc-hb  stop
Sep 15 07:11:58 galileo ntpd[2155]: Deleting interface #12 eth1:1, 192.168.69.28#123, interface stats: received=0, sent=0, dropped=0, active_time=10014031 secs
Sep 15 07:12:02 galileo ResourceManager[5538]: [5752]: info: Running /etc/ha.d/resource.d/Delay 5 0 stop
Sep 15 07:12:02 galileo Delay[5761]: [5765]: INFO: Delay already stopped.
Sep 15 07:12:02 galileo Delay[5753]: [5767]: INFO:  Success
Sep 15 07:12:02 galileo ResourceManager[5538]: [5777]: info: Running /etc/init.d/rc.quota-hb  stop
Sep 15 07:12:02 galileo ResourceManager[5538]: [5796]: info: Running /etc/ha.d/resource.d/Filesystem /dev/mapper/3600c0ff000109896877ab94d01000000p1 /mnt/raw_cr ext4 stop
Sep 15 07:12:02 galileo Filesystem[5805]: [5839]: INFO: Running stop for /dev/mapper/3600c0ff000109896877ab94d01000000p1 on /mnt/raw_cr
Sep 15 07:12:02 galileo Filesystem[5805]: [5854]: INFO: Trying to unmount /mnt/raw_cr
Sep 15 07:12:05 galileo Filesystem[5805]: [5862]: INFO: unmounted /mnt/raw_cr successfully
Sep 15 07:12:05 galileo Filesystem[5797]: [5870]: INFO:  Success
Sep 15 07:12:05 galileo ResourceManager[5538]: [5885]: info: Running /etc/ha.d/resource.d/Filesystem /dev/mapper/3600c0ff0001098965c7ab94d01000000p1 /mnt/backup ext4 usrquota,grpquota stop
Sep 15 07:12:05 galileo Filesystem[5894]: [5928]: INFO: Running stop for /dev/mapper/3600c0ff0001098965c7ab94d01000000p1 on /mnt/backup
Sep 15 07:12:05 galileo Filesystem[5894]: [5943]: INFO: Trying to unmount /mnt/backup
Sep 15 07:12:05 galileo Filesystem[5894]: [5951]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:05 galileo Filesystem[5894]: [5954]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:06 galileo Filesystem[5894]: [5963]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:06 galileo Filesystem[5894]: [5966]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:07 galileo Filesystem[5894]: [5975]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:07 galileo Filesystem[5894]: [5978]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:08 galileo Filesystem[5894]: [5987]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:08 galileo Filesystem[5894]: [5990]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:09 galileo Filesystem[5894]: [5999]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:09 galileo Filesystem[5894]: [6002]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:10 galileo Filesystem[5894]: [6011]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:10 galileo Filesystem[5894]: [6014]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:11 galileo Filesystem[5894]: [6023]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:11 galileo Filesystem[5894]: [6026]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:12 galileo Filesystem[5894]: [6035]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:13 galileo Filesystem[5894]: [6038]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:14 galileo Filesystem[5894]: [6047]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:14 galileo Filesystem[5894]: [6050]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:15 galileo Filesystem[5894]: [6059]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:15 galileo Filesystem[5894]: [6062]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:16 galileo Filesystem[5894]: [6071]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:16 galileo Filesystem[5894]: [6074]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:17 galileo Filesystem[5894]: [6083]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:17 galileo Filesystem[5894]: [6086]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:19 galileo Filesystem[5894]: [6107]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:19 galileo Filesystem[5894]: [6110]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:20 galileo Filesystem[5894]: [6119]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:20 galileo Filesystem[5894]: [6122]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:21 galileo Filesystem[5894]: [6131]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:21 galileo Filesystem[5894]: [6134]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:22 galileo Filesystem[5894]: [6143]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:22 galileo Filesystem[5894]: [6146]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:23 galileo Filesystem[5894]: [6155]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:23 galileo Filesystem[5894]: [6158]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:24 galileo Filesystem[5894]: [6167]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:24 galileo Filesystem[5894]: [6170]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:25 galileo Filesystem[5894]: [6179]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:25 galileo Filesystem[5894]: [6182]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:26 galileo Filesystem[5894]: [6185]: ERROR: Couldn't unmount /mnt/backup, giving up!
Sep 15 07:12:26 galileo Filesystem[5886]: [6193]: ERROR:  Generic error
Sep 15 07:12:26 galileo ResourceManager[5538]: [6195]: ERROR: Return code 1 from /etc/ha.d/resource.d/Filesystem
Sep 15 07:12:27 galileo ResourceManager[5538]: [6199]: info: Retrying failed stop operation [Filesystem::/dev/mapper/3600c0ff0001098965c7ab94d01000000p1::/mnt/backup::ext4::usrquota,grpquota]
Sep 15 07:12:27 galileo ResourceManager[5538]: [6214]: info: Running /etc/ha.d/resource.d/Filesystem /dev/mapper/3600c0ff0001098965c7ab94d01000000p1 /mnt/backup ext4 usrquota,grpquota stop
Sep 15 07:12:27 galileo Filesystem[6223]: [6257]: INFO: Running stop for /dev/mapper/3600c0ff0001098965c7ab94d01000000p1 on /mnt/backup
Sep 15 07:12:27 galileo Filesystem[6223]: [6272]: INFO: Trying to unmount /mnt/backup
Sep 15 07:12:27 galileo Filesystem[6223]: [6280]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:27 galileo Filesystem[6223]: [6283]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:28 galileo Filesystem[6223]: [6292]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:28 galileo Filesystem[6223]: [6295]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:29 galileo Filesystem[6223]: [6304]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:29 galileo Filesystem[6223]: [6307]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:30 galileo Filesystem[6223]: [6316]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:30 galileo Filesystem[6223]: [6319]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:31 galileo Filesystem[6223]: [6328]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:31 galileo Filesystem[6223]: [6331]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:32 galileo Filesystem[6223]: [6340]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:32 galileo Filesystem[6223]: [6343]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:33 galileo Filesystem[6223]: [6352]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:33 galileo Filesystem[6223]: [6355]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:34 galileo Filesystem[6223]: [6364]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:34 galileo Filesystem[6223]: [6367]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:35 galileo Filesystem[6223]: [6376]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:35 galileo Filesystem[6223]: [6379]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:36 galileo Filesystem[6223]: [6388]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with TERM
Sep 15 07:12:36 galileo Filesystem[6223]: [6391]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:37 galileo Filesystem[6223]: [6400]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:37 galileo Filesystem[6223]: [6403]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:38 galileo Filesystem[6223]: [6412]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:38 galileo Filesystem[6223]: [6415]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:46 galileo Filesystem[6223]: [6508]: ERROR: Couldn't unmount /mnt/backup; trying cleanup with KILL
Sep 15 07:12:46 galileo Filesystem[6223]: [6511]: INFO: No processes on /mnt/backup were signalled
Sep 15 07:12:47 galileo Filesystem[6223]: [6514]: ERROR: Couldn't unmount /mnt/backup, giving up!
Sep 15 07:12:47 galileo Filesystem[6215]: [6522]: ERROR:  Generic error
Sep 15 07:12:47 galileo ResourceManager[5538]: [6524]: ERROR: Return code 1 from /etc/ha.d/resource.d/Filesystem
Sep 15 07:12:48 galileo ResourceManager[5538]: [6528]: info: Retrying failed stop operation [Filesystem::/dev/mapper/3600c0ff0001098965c7ab94d01000000p1::/mnt/backup::ext4::usrquota,grpquota]
Sep 15 07:12:48 galileo ResourceManager[5538]: [6543]: info: Running /etc/ha.d/resource.d/Filesystem /dev/mapper/3600c0ff0001098965c7ab94d01000000p1 /mnt/backup ext4 usrquota,grpquota stop
...
As you can see, both the NFS and Samba services appear to stop successfully. Then the unmounting of all the filesystems begins, and here the problems start... In particular, heartbeat is not able to unmount (at least) the ext4 filesystem mounted on /mnt/backup and, as a consequence, it tries to clean up the processes holding files on /mnt/backup, first with TERM and then with KILL (the command heartbeat uses for that is essentially "fuser -k -TERM/-KILL -m /mnt/backup"). But both passes find no processes to signal, and the unmount operation fails... At this point the ResourceManager keeps retrying the failed stop operation until the server crashes and reboots...
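
For reference, this is roughly the cleanup sequence the Filesystem resource agent attempts, reproduced by hand (a sketch; the exact flags may differ slightly between heartbeat versions):
Code:
# roughly what heartbeat's Filesystem agent does on "stop" (sketch)
fuser -k -TERM -m /mnt/backup   # SIGTERM every process using the mount
sleep 1
fuser -k -KILL -m /mnt/backup   # escalate to SIGKILL
umount /mnt/backup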

So it seems there are some processes, not detected by fuser, that prevent the filesystem mounted on /mnt/backup from being unmounted. Any idea what kind of processes these could be and how I can kill them??

Any help is really appreciated. Thanks a lot!

Regards

Last edited by gda; 10-09-2017 at 11:38 AM.
 
Old 10-09-2017, 11:39 AM   #2
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 6,986
Blog Entries: 14

Rep: Reputation: 1187
Rather than using fuser to determine what processes are using a given filesystem, you might want to try lsof. In the past I've found that fuser occasionally misses things that lsof will report. (E.g., if you cd into the mount point, then su to another user such as root and cd to a separate directory, your original login is still keeping the filesystem busy, but the cd can make it appear that the terminal isn't.)
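
For a quick side-by-side check, something like this (a sketch; /mnt/backup stands in for whatever mount you're inspecting):
Code:
# show users/PIDs holding the mount, verbosely
fuser -vm /mnt/backup
# list every open file on that filesystem
lsof /mnt/backup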

On occasion NFS mounts are lost while something is keeping them busy, so you can't unmount them. Using the "-l" option with umount does a "lazy unmount", which tells it to detach the filesystem immediately rather than wait for processes to die.
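
Something like this (a sketch):
Code:
# detach the mount point now; the kernel cleans up the remaining
# references once the last user goes away
umount -l /mnt/backup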
 
Old 10-09-2017, 12:03 PM   #3
gda
Member
 
Registered: Oct 2015
Posts: 103

Original Poster
Rep: Reputation: 25
Thanks a lot for the fast reply!

Quote:
Originally Posted by MensaWater View Post
Rather than using fuser to determine what processes are using a given filesystem you might want to try lsof. In the past I've found that fuser occasionally ignores some things that lsof will report. (e.g. If you cd to the mount then su to another user such as root and cd to a separate directory your original login is still busying out the filesystem but your cd makes it sometimes appear that terminal isn't.)
Interesting...

This is the output of lsof:
Code:
root@galileo:/var/log# lsof | grep /mnt/backup 
smbd       3135       root  cwd       DIR              253,9     4096          2 /mnt/backup
smbd      28444       root  cwd       DIR              253,9     4096          2 /mnt/backup
And this is the output of fuser:
Code:
root@galileo:/var/log# fuser -m /mnt/backup/
/mnt/backup/:         3135c 28444c
So at least at the moment the two commands return the same processes... The problem is that right now I don't know what will happen when I stop heartbeat... Unfortunately I cannot run this kind of test easily because this is a production environment... Anyway, I will keep in mind to also check what lsof returns in case the problem shows up again...

Quote:
Originally Posted by MensaWater
On occasion NFS mounts are lost while something is busying them out so you can't unmount. Using the "-l" option with umount does a "lazy unmount" which tells it not to wait for processes to die.
Actually, the "-f" option is already used to unmount the NFS and SMBFS mounts, which I guess produces an effect similar to a lazy unmount. Is that correct?
 
Old 10-09-2017, 02:18 PM   #4
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 6,986
Blog Entries: 14

Rep: Reputation: 1187
Quote:
Originally Posted by gda View Post
Actually the option "-f" is used to umount NFS and SMBFS mounts which I guess produces a similar effect of a lazy umount. Is that correct?
"-f" is "force", and it does NOT always manage to perform the unmount, which is why I needed to find another way and learned about the "-l" option. Ideally a simple "umount" should work; "-f" and "-l" should only be used as last resorts. I have usually used "-l" when the server sharing the filesystem has gone down and one of the clients has hung processes because it can't traverse the list of mounts (e.g. commands like "df" hang when they get to the mount). Some processes, such as quota, check NFS mounts on login even if no quotas are set, and will hang.
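
Put as a shell fallback chain, the escalation would look something like this (a sketch, not what heartbeat itself runs):
Code:
# try a clean unmount first, then force, then lazy as a last resort
umount /mnt/backup || umount -f /mnt/backup || umount -l /mnt/backup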
 
Old 10-09-2017, 04:04 PM   #5
gda
Member
 
Registered: Oct 2015
Posts: 103

Original Poster
Rep: Reputation: 25
You are right! I have checked the two options, and there are indeed cases where "-f" may fail and "-l" may work.

Two additional points are not fully clear to me:

1 - Suppose a filesystem can be unmounted successfully without the "-l" option. Is there any risk in unmounting it with "-l" anyway?

2 - I suppose it is possible to use the "-l" and "-f" options at the same time. What happens in that case, exactly? Which option takes priority? Does it make any sense to use both?

Thanks again for the useful help!
 
Old 10-13-2017, 10:14 AM   #6
gda
Member
 
Registered: Oct 2015
Posts: 103

Original Poster
Rep: Reputation: 25
UPDATE

I tried stopping by hand all the services started by Heartbeat, in order to debug the problem. It seems the unmounting problem shows up for one single volume only. The server exports a total of 5 different volumes. After stopping NFS, Samba, ..., I was able to unmount 4 of the volumes normally, while for the volume mounted on /mnt/backup I got this:

Code:
root@galileo:~# umount /mnt/backup     
umount: /mnt/backup: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
root@galileo:~# lsof | grep /mnt/backup
root@galileo:~# fuser -m /mnt/backup
root@galileo:~# umount -f /mnt/backup
umount2: Device or resource busy
umount: /mnt/backup: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
umount2: Device or resource busy
root@galileo:~# umount -l /mnt/backup
root@galileo:~#
As you can see, plain "umount" and "umount -f" do not work, and neither "lsof" nor "fuser" returns any process to kill... Finally, as suggested by MensaWater, I was able to unmount the volume using the "-l" option.

Now I'm trying to understand why all this happens only for this particular volume. The only difference from the others is that it is an ext4 volume on which quotas have been turned on. As I have other ext4 volumes (without quotas) that get unmounted correctly, I suppose the problem is somehow related to the quotas.

This is the script I use to turn quotas on and off:
Code:
#!/bin/bash

quota_start() {
# Check quotas and then turn quota system on:
if grep -q quota /etc/mtab ; then
  for quotafs in $(awk '/quota/ {print $2}' /etc/mtab) ; do
    /bin/rm -f $quotafs/{a,}quota.{group,user}.new
  done
  if [ -x /sbin/quotacheck ]; then
    /sbin/quotacheck -augm
  fi
  if [ -x /sbin/quotaon ]; then
    /sbin/quotaon -aug
  fi
  touch /var/run/quota.lock
fi
}

quota_stop() {
# Try to turn off quota.
if /bin/grep -q quota /etc/mtab ; then
  if [ -x /sbin/quotaoff ]; then
    /sbin/quotaoff -a
  fi
fi
if [ -f /var/run/quota.lock ]; then
 rm /var/run/quota.lock
fi
}

quota_mystart(){
# Check first if quotas are already running. If not, start them.
if [ ! -f /var/run/quota.lock ]; then 
  quota_start
fi
}

case "$1" in
'start')
  quota_mystart
  ;;
'stop')
  quota_stop
  ;;
*)
  echo "usage $0 start|stop"
esac
Each time Heartbeat starts, this script is run (with the "start" option) right after all the volumes have been mounted. Conversely, when Heartbeat stops, the same script is run (with the "stop" option) BEFORE the volumes are unmounted. No errors are logged when quotas are turned on or off by this script. The only strange thing I see is that turning quotas on takes quite a long time (a few minutes). The size of the volume is about 2TB.
Moreover, to enable quotas, the volume is mounted with the "usrquota,grpquota" options.
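
One check I can run by hand before the unmount (a sketch; "quotaon -p" just prints the current on/off state):
Code:
# verify quotas are really off on the volume before trying to unmount
quotaon -p /mnt/backup    # print user/group quota state
quotaoff -v /mnt/backup   # turn them off explicitly (verbose)
umount /mnt/backup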

Do you see anything wrong in the way I am turning quotas on and off?

Finally, since the "-l" option does seem to unmount the volume, could I configure Heartbeat to ALWAYS unmount ALL the volumes using this option? Is it safe to use this option on a volume that in principle can be unmounted without it?

Thanks a lot for your help!

Last edited by gda; 10-13-2017 at 11:39 AM.
 
Old 10-13-2017, 01:46 PM   #7
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 6,986
Blog Entries: 14

Rep: Reputation: 1187
Rather than "lsof | grep /mnt/backup" try just "lsof /mnt/backup".
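
The difference, roughly (a sketch):
Code:
lsof | grep /mnt/backup   # only matches lines whose printed path contains the string
lsof /mnt/backup          # treats the argument as a filesystem and reports
                          # every process with anything open on it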
 
Old 10-20-2017, 11:30 AM   #8
gda
Member
 
Registered: Oct 2015
Posts: 103

Original Poster
Rep: Reputation: 25
UPDATE #2

I tried again today... same story...

Code:
root@galileo:~# 
root@galileo:~# umount /mnt/backup 
umount: /mnt/backup: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
root@galileo:~# umount -f /mnt/backup
umount2: Device or resource busy
umount: /mnt/backup: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
umount2: Device or resource busy
root@galileo:~# fuser -m /mnt/backup 
root@galileo:~# lsof /mnt/backup 
root@galileo:~# umount -l /mnt/backup 
root@galileo:~#
It seems the only option I have for unmounting the volume is to use "-l". I really don't understand why this happens... As I have already stressed, I think the problem is related to the quotas I have turned on on this ext4 volume.
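
For reference, after the lazy unmount the volume disappears from the mount table right away; this is the check I use (a sketch):
Code:
umount -l /mnt/backup
grep ' /mnt/backup ' /proc/mounts || echo "detached"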
So once again I repeat my old questions:

1) Do you see anything wrong in the way I am turning quotas on and off (see my previous post)?

2) Can the "-l" option be used to cleanly unmount a volume that would be unmounted successfully without it? Is it safe?

Thanks for your help!

Regards!
 