LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - General
Linux - General This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.

Old 09-13-2017, 08:35 AM   #1
brancalessio
LQ Newbie
 
Registered: Aug 2004
Posts: 24

Rep: Reputation: 0
ext4 directory hole after writing (by mistake) a lot of files


Hi everyone!

I hope I'm posting in the right forum.

I recently had a problem with my ext4 filesystem. Let's start from the beginning.

I had to write a program in Java (compiled with Oracle Java), and by mistake it did the following:
  • read one byte from the source file
  • create a file (like dest0) and write that byte
  • read the next byte from the source file
  • create a file (like dest1) and write that byte
  • and so on...
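The buggy loop can be mimicked in a few lines of shell (a sketch only; `source.bin` and the `dest*` names are illustrative, and the actual Java program is not shown in the thread). Note that command substitution strips trailing newlines, so a newline or NUL byte in the input would end this particular sketch early:

```shell
# Sketch of the one-file-per-byte loop described above (illustrative names).
cd "$(mktemp -d)"                  # sandbox for the demo
printf 'hello' > source.bin        # 5-byte sample input
i=0
# read one byte at a time; dd output is empty once we run past EOF
while byte=$(dd if=source.bin bs=1 skip="$i" count=1 2>/dev/null); [ -n "$byte" ]; do
    printf '%s' "$byte" > "dest$i" # create dest0, dest1, ... one byte each
    i=$((i + 1))
done
```

With a multi-gigabyte input this creates one directory entry per byte, which is how the directory ballooned.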

By the time I realised something was wrong, so many files had been created that it was not possible to delete them simply with

Code:
rm -R dest*
Bash complained that the argument list was too long! I had to use

Code:
find . -name "dest*" -exec rm {} \;
and it took a lot of time.
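For what it's worth, `find` can also delete matches directly, which avoids spawning one `rm` process per file (a sketch, assuming GNU or BSD findutils; the sandbox setup is only for demonstration):

```shell
# Demo sandbox with a few stand-in files.
cd "$(mktemp -d)"
touch dest0 dest1 dest2 keepme

# -delete removes every match without one rm process per file
find . -maxdepth 1 -name 'dest*' -delete

# alternative: batch names into as few rm invocations as fit the
# kernel's argument-length limit
# find . -maxdepth 1 -name 'dest*' -print0 | xargs -0 rm -f
```

Either variant is much faster than `-exec rm {} \;`, which forks a new process for every single file.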

I was able to delete all the dest* files, but the filesystem seemed to be damaged afterwards.

Even though listing the directory where the dest* files had been created showed only about 10 items (files and directories), some programs took ages to list its contents.

Then I ran fsck.ext4, first without options and then with the -D option (optimizes directories in the filesystem), and it turned out that there was a "directory hole".

In principle, a directory in the ext4 filesystem can contain a very large number of files, and I do not think I reached that limit.

In any case, the filesystem should not have been damaged just by this.

Does anyone know what could have happened?

My distribution is Linux Mint 17.2 Rafaela, with the default kernel 3.16.0-38-generic #52~14.04.1-Ubuntu SMP. I used e2fsck 1.42.9 (4-Feb-2014).

Thank you for your answers!
 
Old 09-13-2017, 09:38 AM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,849

Rep: Reputation: 7309
probably you can try:
Code:
tree <dir>
find <dir>
to see what is inside.
 
Old 09-13-2017, 10:44 AM   #3
brancalessio
LQ Newbie
 
Registered: Aug 2004
Posts: 24

Original Poster
Rep: Reputation: 0
The content is actually fine now.

My question is a different one: why did this happen? Why did the kernel not stop the program, instead of letting it corrupt the filesystem?
 
Old 09-13-2017, 12:01 PM   #4
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
You probably created the problem by trying to create thousands of files in a single directory. If you know that you are going to create a very large number of files, you should arrange for a directory structure to partition the collection in some reasonable-to-you way.
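One common way to do that partitioning (a sketch; the fan-out by hash prefix is my own choice, not something from the thread) is to bucket files by a short prefix of a hash of their name, so no single directory grows huge:

```shell
# Sketch: spread files across up to 256 subdirectories keyed by the
# first two hex digits of an md5 hash of each name (illustrative scheme).
cd "$(mktemp -d)"
for i in $(seq 0 99); do
    name="dest$i"
    bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
    mkdir -p "$bucket"
    : > "$bucket/$name"           # create the (empty) file in its bucket
done
```

With a million files, each bucket then holds only a few thousand entries instead of the whole collection sitting in one directory.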
 
Old 09-13-2017, 01:09 PM   #5
brancalessio
LQ Newbie
 
Registered: Aug 2004
Posts: 24

Original Poster
Rep: Reputation: 0
Yes, but it seems that ext4 has no particular limit on the number of files or directories within a given directory. The only limit (as far as I understand) is global: the total number of inodes.

https://kernelnewbies.org/Ext4
https://en.wikipedia.org/wiki/Ext4

My program should have been stopped once there were no more inodes, instead of damaging the filesystem.
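For reference, the inode headroom of a filesystem is visible with `df -i`; once IFree reaches zero, creating a new file fails with ENOSPC ("No space left on device") rather than corrupting anything:

```shell
# Show inode totals and usage for the filesystem holding the
# current directory (IUsed/IFree columns show the global budget).
df -i .
```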
 
Old 09-13-2017, 03:10 PM   #6
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
Quote:
Originally Posted by brancalessio View Post
My program should have been stopped because there were no more inodes instead of damaging the file system.
I would then encourage you to open a trouble-ticket with the Linux developers. It would probably be most informative to them if you could construct a test-script which (in a freshly-made VM, of course ...) consistently replicates the problem using a current distro. (i.e. "install a brand-new Linux on a brand-new VM, run this script, and 'presto, every single time.'")
 
Old 09-13-2017, 03:44 PM   #7
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,140

Rep: Reputation: 1263
Directories store their file entries in allocated blocks; as you add files the directory grows, but deleting files leaves that space allocated rather than shrinking it. Your directory file is probably huge, with most of its entries now empty. To fix this, create a new empty directory (mkdir), move the files from the old directory to the new one (mv), delete the old directory (rmdir), then rename the new directory to the old name (mv).
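That rebuild can be sketched as follows (directory names are illustrative, and the first two lines only create a stand-in for the bloated directory; note that `mv bloated/*` moves only non-hidden entries, which is fine for dest* files):

```shell
# Sandbox with a stand-in for the bloated directory.
cd "$(mktemp -d)"
mkdir bloated && touch bloated/dest0 bloated/dest1

mkdir bloated.new          # fresh directory with a compact entry list
mv bloated/* bloated.new/  # move the surviving files over
rmdir bloated              # the old, oversized directory is now empty
mv bloated.new bloated     # give the new directory the old name
```

The new directory's entry list is only as large as its current contents, so listings are fast again.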
 
Old 09-15-2017, 09:07 AM   #8
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,659
Blog Entries: 4

Rep: Reputation: 3941
Quote:
Originally Posted by smallpond View Post
Directories store their file entries in allocated blocks; as you add files the directory grows, but deleting files leaves that space allocated rather than shrinking it. Your directory file is probably huge, with most of its entries now empty. To fix this, create a new empty directory (mkdir), move the files from the old directory to the new one (mv), delete the old directory (rmdir), then rename the new directory to the old name (mv).
But if, as the OP suggests, a filesystem data-structure error was produced by what they did, then this probably should be reported as a bug if it is readily reproducible.
 
Old 09-15-2017, 10:12 AM   #9
brancalessio
LQ Newbie
 
Registered: Aug 2004
Posts: 24

Original Poster
Rep: Reputation: 0
Thank you all for your answers. I will try to contact the people that develop ext4 and I will let you know!
 
  

