Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
04-02-2011, 03:20 AM | #1
Sheridan (Member)
Registered: Aug 2007 | Location: Hungary | Distribution: Fedora, CentOS | Posts: 91
There's a file - but apparently, there isn't ???
Hi everyone,
After a server hang and restart, I ran into this issue on an XFS filesystem on a cciss device (HP ProLiant server).
fsck gave the partition a "clean bill of health" at boot and it was mounted automatically. Later, when I took a closer look, I found a directory with the following content, which cannot be removed, read, overwritten, statted, appended to, or otherwise manipulated in any manner, not even as root.
Code:
[root@sheridan atemp22]$ ls -lah
ls: cannot access atemp.dat: No such file or directory
total 4.0K
drwxrwxrwx 2 sheridan sheridan 27 Apr 2 09:57 .
drwxr-xr-x 3 root root 4.0K Apr 1 01:09 ..
?????????? ? ? ? ? ? atemp.dat
Repeated runs of the XFS repair utility report that the filesystem is clean and that there is nothing else it can do. ...
Something is definitely screwed up with my filesystem, IMHO. Can you help?
04-02-2011, 03:32 AM | #2
druuna (LQ Veteran)
Registered: Sep 2003 | Posts: 10,532
Hi,
A line like the one shown above does point to a (partially) corrupt FS.
I don't use XFS myself, but the fsck.xfs man page points to xfs_check and xfs_repair. Did you use these to check and possibly repair your XFS filesystem?
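For reference, a typical check-and-repair pass on an unmounted XFS volume looks something like the sketch below; the device name is only an example based on your cciss setup:
Code:
# the filesystem must be unmounted before checking or repairing
umount /dev/cciss/c0d0p2

# read-only consistency check
xfs_check /dev/cciss/c0d0p2

# dry run first: -n reports problems but changes nothing
xfs_repair -n /dev/cciss/c0d0p2

# actual repair pass
xfs_repair /dev/cciss/c0d0p2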
Hope this helps.
04-02-2011, 06:39 AM | #3
Sheridan (Member, Original Poster)
Registered: Aug 2007 | Location: Hungary | Distribution: Fedora, CentOS | Posts: 91
Quote:
Originally Posted by druuna
Hi,
A line like the one shown above does point to a (partially) corrupt FS.
I don't use XFS myself, but the fsck.xfs man page points to xfs_check and xfs_repair. Did you use these to check and possibly repair your XFS filesystem?
Hope this helps.
Unfortunately, yes. I ran them both several times while the filesystem was unmounted, and nothing in the output suggested that anything was found or done. The problem still persists... That's why I'm writing here, I'm afraid :-(
04-05-2011, 11:06 AM | #4
Reuti (Senior Member)
Registered: Dec 2004 | Location: Marburg, Germany | Distribution: openSUSE 15.2 | Posts: 1,339
I have seen such output only on NFS clients where the directory permissions were drwxr--r--. Sure, without the x bit there shouldn't be any output at all, but that seems to be the way it's implemented. I don't know whether it's related to your XFS case.
Can the file in question still be accessed?
Last edited by Reuti; 04-05-2011 at 11:07 AM. Reason: Got token expired before.
1 member found this post helpful.
04-06-2011, 03:21 AM | #5
Sheridan (Member, Original Poster)
Registered: Aug 2007 | Location: Hungary | Distribution: Fedora, CentOS | Posts: 91
Nope, it can't be.
It seems the file doesn't "exist" at all. Let me explain:
Let's say I have a file with this "???" symptom named "abc.sh". If I try to tail or cat it, it won't work. If I do "touch abc.sh", the output of "ls -lah" will show TWO files with the exact SAME name and the same permissions (no question marks on either!). If I then try to do anything with that file name, the new file is the one accessed: I can add content and remove it if I wish, but the old file remains. If I issue an "rm -f" for the file, the old file is still there with the "??" marks even after I delete the new one.
OK, let me visualize what happens when I try to create a new file with the same name, as described:
Code:
[sheridan@sheridan atemp22]$ ls -lah
total 4.0K
drwxrwxrwx 2 sheridan sheridan 48 Apr 6 10:28 .
drwxr-xr-x 3 root root 4.0K Apr 2 23:21 ..
-rw-rw-r-- 1 sheridan sheridan 0 Apr 6 10:28 atemp.dat
-rw-rw-r-- 1 sheridan sheridan 0 Apr 6 10:28 atemp.dat
As you can see, the "old" entry is "shadowing" every parameter of the "new" file, even the owner...
I can add content, blah blah... but then, when I try to delete the new file, the result is:
Code:
[sheridan@sheridan atemp22]$ ls -lah
ls: cannot access atemp.dat: No such file or directory
total 4.0K
drwxrwxrwx 2 sheridan sheridan 27 Apr 2 09:57 .
drwxr-xr-x 3 root root 4.0K Apr 1 01:09 ..
?????????? ? ? ? ? ? atemp.dat
Last edited by Sheridan; 04-06-2011 at 03:35 AM.
04-06-2011, 04:26 AM | #6
druuna (LQ Veteran)
Registered: Sep 2003 | Posts: 10,532
Hi,
You cannot create 2 files with the same name (in the same directory).
If you _are_ able to do this, then 2 scenarios come to mind:
1) The names aren't actually the same (a hidden/control character is present in one of the 2; a quick check for this is sketched below), or
2) Your filesystem is corrupt.
I'm assuming the second is true in your case (corrupt inode table?).
Besides the answer I gave in post #2, which you say did not find/fix anything, I wouldn't know how to fix this.
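A minimal way to check for hidden characters in the names, assuming GNU coreutils (which Fedora/CentOS ship):
Code:
# -b prints non-graphic characters in file names as C-style escapes
ls -lab

# or dump every byte of each name to spot stray control characters
printf '%s\n' * | od -c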
1 member found this post helpful.
04-06-2011, 04:56 AM | #7
jschiwal (LQ Guru)
Registered: Aug 2001 | Location: Fargo, ND | Distribution: SuSE AMD64 | Posts: 15,733
Use lsof to determine what program accesses the atemp.dat file. Is the file something that is needed?
Having two files with the same name sounds like a hard-linked file, which is really just two directory entries for the same file, and that again implies filesystem or disk corruption.
If this is the only file affected and you don't need it, you can try to access it via its inode number. Use "ls -i" to list the inodes, then use find to delete it:
Code:
find ./ -inum 23242 -delete
Monitor the filesystem to see if weird permissions persist. You may need to back up the files, reformat, and restore. If the disk itself is going bad, you may need to restore to a new disk later on.
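A rough sketch of that whole sequence; the mount point is hypothetical and 23242 is just the placeholder inode number from above:
Code:
# see whether any process still holds the file open (+D walks the dir tree)
lsof +D /mnt/data/atemp22

# list inode numbers for everything in the directory, dotfiles included
ls -lai /mnt/data/atemp22

# then delete by inode number
find /mnt/data/atemp22 -inum 23242 -delete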
Last edited by jschiwal; 04-06-2011 at 05:07 AM.
1 member found this post helpful.
04-06-2011, 04:57 AM | #8
Sheridan (Member, Original Poster)
Registered: Aug 2007 | Location: Hungary | Distribution: Fedora, CentOS | Posts: 91
Quote:
Originally Posted by druuna
Hi,
You cannot create 2 files with the same name (in the same directory).
If you _are_ able to do this, then 2 scenarios come to mind:
1) The names aren't actually the same (a hidden/control character is present in one of the 2), or
2) Your filesystem is corrupt.
I'm assuming the second is true in your case (corrupt inode table?).
Besides the answer I gave in post #2, which you say did not find/fix anything, I wouldn't know how to fix this.
I understand... Since it seems there's nothing else I can think of to do without endangering data integrity, I believe I will just do a full backup of everything else and reformat from scratch. Without knowing the root cause, however, I have no idea whether this will happen again.
Thank you all for your kind help and understanding.
04-06-2011, 05:01 AM | #9
Sheridan (Member, Original Poster)
Registered: Aug 2007 | Location: Hungary | Distribution: Fedora, CentOS | Posts: 91
Quote:
Originally Posted by jschiwal
Use lsof to determine what program accesses the atemp.dat file. Is the file something that is needed?
Having two files with the same name sounds like a hard-linked file, which is really just two directory entries for the same file, and that again implies filesystem or disk corruption.
If this is the only file affected and you don't need it, you can try to access it via its inode number. Use "ls -i" to list the inodes, then use find to delete it:
find ./ -inum 23242 -delete
Thanks! Tried it now.
Result: the same "ls: cannot access ...: No such file or directory", and in place of the inode number I only see a "?"...
04-07-2011, 01:23 AM | #10
chrism01 (LQ Guru)
Registered: Aug 2004 | Location: Sydney | Distribution: Rocky 9.x | Posts: 18,441
Did you try to 'ls -i' the file, or just run 'ls -i' with no params? The latter is the correct way to do it; it will show the inode num for every file in the current dir.
You can then use jschiwal's 'find ...' to delete by inum. Another method is to mv all the good files out of the dir, remove and re-create the dir, and put the good files back (see the sketch below). Be very careful with that rm cmd.
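A sketch of that salvage route, with example paths (adjust to your layout):
Code:
# move the intact files aside (the broken entry itself cannot be moved)
mkdir /tmp/salvage
mv atemp22/* /tmp/salvage/

# remove the damaged directory - this is the dangerous step, double-check the path
rm -rf atemp22

# re-create it and put the good files back
mkdir atemp22
mv /tmp/salvage/* atemp22/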
1 member found this post helpful.
04-07-2011, 01:29 AM | #11
Sheridan (Member, Original Poster)
Registered: Aug 2007 | Location: Hungary | Distribution: Fedora, CentOS | Posts: 91
Quote:
Originally Posted by chrism01
Did you try to 'ls -i' the file, or just run 'ls -i' with no params? The latter is the correct way to do it; it will show the inode num for every file in the current dir.
You can then use jschiwal's 'find ...' to delete by inum. Another method is to mv all the good files out of the dir, remove and re-create the dir, and put the good files back. Be very careful with that rm cmd.
Hi chrism,
Yes, I did use it in the above-mentioned format (i.e. without a filename) as well as with a filename, etc. I didn't get any inode numbers, only a "?" mark in place of the inode number and the same error message as always.
As for rm -rf, it was considered, and I actually tried it, since I guess the data is lost anyway and it's not worth the effort to try to "deep-salvage" it - I'm regenerating it even as we speak, though it may take a few days. Anyway, the recursive forced remove had no effect: since the file (or whatever it is) itself cannot be removed, removing the non-empty directory is not possible. :-(
Last edited by Sheridan; 04-07-2011 at 01:31 AM.