Hi everyone!
I hope I'm posting in the right forum.
I recently had a problem with my ext4 filesystem. Let's start from the beginning.
I had to write a program in Java (everything was compiled with Oracle Java) and by mistake my program did the following:
- read one byte from the source file
- create a file (like dest0) and write that byte
- read the next byte from the source file
- create a file (like dest1) and write that byte
- and so on...
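For clarity, here is a minimal sketch of what such a buggy loop might have looked like. This is a hypothetical reconstruction, not my original code; the file names "source.bin" and "destN" are just placeholders:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical reconstruction: creates one single-byte file per input byte.
public class ByteSplitter {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream("source.bin")) {
            int b;
            int i = 0;
            // Read one byte at a time from the source file...
            while ((b = in.read()) != -1) {
                // ...and create a new file (dest0, dest1, ...) for each byte.
                try (FileOutputStream out = new FileOutputStream("dest" + i)) {
                    out.write(b);
                }
                i++;
            }
        }
    }
}
```

With a source file of a few megabytes, a loop like this creates millions of directory entries, which is what filled up my directory.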
By the time I realised that I had made a mistake, so many files had been created that it was not possible to delete them with a simple shell glob:
Bash complained that the argument list was too long! I had to use
Code:
find . -name "dest*" -exec rm {} \;
and it took a lot of time.
That let me delete all the dest* files, but the filesystem seemed to be damaged afterwards.
Even though listing the directory where the dest* files had been created showed only about 10 items (files or directories), some programs took ages to list its contents.
Then I ran fsck.ext4, first without options and then with the -D option (which optimises directories in the filesystem), and it turned out that there was a "directory hole".
In principle, a directory in the ext4 filesystem can hold a very large number of files, and I do not think I reached that limit.
In any case, the filesystem should not have been damaged just by this.
Does anyone know what could have happened?
My distribution is Linux Mint 17.2 Rafaela, with the default kernel 3.16.0-38-generic #52~14.04.1-Ubuntu SMP. I used e2fsck 1.42.9 (4-Feb-2014).
Thank you for your answers!