
dapsychous 03-08-2006 08:15 AM

Stale NFS Handle without using NFS?
Hey guys.

I have a RedHat 9 system at work that I use to recover data from dead computers using a BASH script and various third-party apps (nothing spectacular: cdrecord, text2dos, etc.).

My problem is that after mounting the new drive and copying data to a directory on /dev/hda2 mounted on /recovery, I get the error: STALE NFS HANDLE: could not access /t/Smith/My Documents/.

If I attempt to do a directory listing after the script runs, I get:

ls: .: Stale NFS file handle.

This machine does not use NFS. I have tried disabling NFS entirely, and this did not help.

If I unmount and remount the entire /dev/hda2 partition, it miraculously starts working again. This is a problem, however, because there are more operations that take place after the file copy (inventory of files, log file generation/printing, burning a CD/DVD of the data files). Every operation after the file copy fails.
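One way to keep the later steps from failing silently is to test the directory before each one. This is only a sketch of that idea -- `check_dir` is a hypothetical helper, not part of the original script:

```shell
#!/bin/sh
# Hypothetical guard: verify a directory is still readable before
# moving on to the inventory/logging/burning steps. Once the mount
# goes stale, even a plain ls on it fails.
check_dir() {
    if ! ls "$1" >/dev/null 2>&1; then
        echo "stale or unreadable: $1" >&2
        return 1
    fi
    return 0
}

# Possible use inside the recovery script (paths as described above):
# check_dir /recovery || { umount /recovery && mount /dev/hda2 /recovery; }
```

The remount-on-failure line at the bottom is just the manual workaround from above, automated; it obviously doesn't address the root cause.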

I have been fighting this problem for weeks now and have no idea where to look for a setting that will fix it.

Any help would be appreciated.


dapsychous 03-08-2006 08:32 AM

Oh, I should probably mention that this isn't a standard ext2 filesystem. For Windows compatibility reasons, this partition runs FAT32.

I can post any configuration files you like (I'll star out sensitive info).

dapsychous 03-08-2006 10:19 AM

Oops, never mind. Figures that as soon as I ask the question, I find the answer on my own. If you're curious, I have AVG Network Server Edition for Linux running on this machine. The problem occurs when the script runs a virus scan on the recovered data; all the operations after that fail.

Guess I have to remove and reinstall AVG.

dapsychous 03-08-2006 12:44 PM

No, apparently I was wrong. That didn't fix it. The problem continued to occur after I had disabled AVG and created directories by hand. Unmounting and remounting no longer works; I have to restart the entire machine for the problem to rectify itself. Any ideas would be of immense help.

jkmccarthy 10-07-2006 12:11 AM

Stale NFS Handle on local FAT32 data disk
I have just encountered the same problem (again -- it happened to me once before, about 6-8 months ago) under RedHat Linux 7.3 (running kernel 2.4.20-46.7), where I too have had large FAT32 hard drives mounted as data disks. The files were created (and subsequently became inaccessible) while the dual-boot machine was running Linux (data-processing C shell scripts ... fast! :-). After the "Stale NFS" error message (and like the previous poster, I too am not running NFS, and the FAT32 disk is mounted on localhost), file listings using /bin/ls show dozens of files with a size of zero bytes whose contents I can no longer access.

Rebooting Linux and remounting the disk does not restore access to the files. Booting the machine into Win2K, the DOS "chkdsk" or an equivalent tool identifies the orphaned files as lost and recovers them, but with generic FILE0001.CHK-type names that require further (manual) effort from me to restore them to their original names and locations. Not an especially fun way to spend an evening, so before this strikes me a third time, I'd really appreciate some help.
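The zero-byte symptom can at least be enumerated from the Linux side before resorting to chkdsk, which makes the later manual renaming less painful. A minimal sketch (`list_zero_byte` is my own helper name, and the path is illustrative):

```shell
#!/bin/sh
# Sketch: list the zero-length files described above so they can be
# cross-checked against chkdsk's recovered FILE*.CHK fragments later.
list_zero_byte() {
    find "$1" -type f -size 0 -print
}

# Example: list_zero_byte /work1 > zero-byte-files.txt
```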

The disk is 400 GB, parallel ATA, with a 16 MB cache.

The entry I'm currently using for the FAT32 disk in /etc/fstab is:

/dev/hde1 /work1 vfat noauto,user,owner,rw,exec 0 0

...meaning I mount the disk manually after logging in (which is fine, since this way at least I "own" the files and have read/write privileges from my current login account). The prior (first) time I had this problem (using another PC and a different disk -- so I don't think the problem is hardware-related; furthermore, the disks and file access work very reliably from Win2K, hard as that may be for some to believe :-), my /etc/fstab entry for that other FAT32 disk was specific to my Linux user ID:

/dev/sdc1 /work0 vfat noauto,uid=500,gid=500,umask=000 0 0

... but despite these differences, the two incidents of lost file handles were very similar.
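For anyone comparing their own setup against the two entries above, the option field is the fourth column of each fstab line. A small sketch to pull it out for a given mount point (`fstab_opts` is a name I made up; point it at a copy of /etc/fstab):

```shell
#!/bin/sh
# Sketch: print the mount-option field for a given mount point from an
# fstab-style file, skipping comment lines, so two configurations can
# be compared side by side.
fstab_opts() {
    awk -v mp="$2" '$1 !~ /^#/ && $2 == mp { print $4 }' "$1"
}

# Example: fstab_opts /etc/fstab /work1
```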

Can anyone offer an explanation or remedy for this problem?

Thanks in advance for any help,

-- Jim

jkmccarthy 11-25-2006 08:50 PM

Large FAT32 disks -- caution with older Linux kernels
Just a follow-up for anyone searching and coming across this thread. The problem in my case turned out to be due to using an older Linux kernel (anything older than 2.4.25) that does not support large FAT32 disks (>128 GB), at least not without patching the kernel.

The problem is that once a disk fills up beyond a certain point (causing sectors above the 128 GB mark on the disk to start being used?), older Linux kernels will corrupt the file directories, causing newer files to be reported as having a zero-byte file size. It's easy at first to mistake this for a hardware problem (e.g., one or more bad sectors on the disk), but don't be fooled -- if the kernel is older than 2.4.25, it's likely a kernel FAT32 software issue.
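Whether a given machine is at risk comes down to a dotted-version comparison against 2.4.25. A sketch of that check (`kernel_at_least` is a hypothetical helper; it relies on GNU `sort -V`, which very old distros may lack):

```shell
#!/bin/sh
# Sketch: flag kernels older than 2.4.25, the cutoff mentioned above
# for unpatched large-FAT32 support.
kernel_at_least() {
    # True (exit 0) if dotted version $1 >= $2: sort the two versions
    # and check that $2 comes first (i.e., is the smaller of the pair).
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

# Example against the running kernel, stripping the vendor suffix:
# kernel_at_least "$(uname -r | cut -d- -f1)" 2.4.25 || echo "patch may be needed"
```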

When I encountered this problem I was running 2.4.20-46.7 ... RedHat still considers RH7.3 an officially supported distro and still provides regular patch updates for it -- addressing security issues, but apparently not other critical kernel bugs like large FAT32 support?! So I had to track down and apply the appropriate kernel patches myself, and all is well now.


-- Jim

P.S. I just opened another LQ thread with details on where I found the patches etc. See:
