File systems and large numbers of files per directory
I have been in the process of trying to replace a Windows NT 4.0 server with a Linux server for the past couple of weeks.
The Windows NT server has 923,176 files in a directory.
After getting everything worked out with Samba, I started copying files over to it. At first the transfer went fairly well, with the network showing anywhere from 40-60% utilization (100 Mbit Ethernet). However, as more and more files were copied, the rate started to slow.
I am assuming this is happening because Linux stores directory entries in a simple linear list, so inserting each new file means scanning the entries already there. I tried copying the same files to an NT server and there is no corresponding decrease in transfer speed as the number of files grows.
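One way to check whether directory inserts themselves are the bottleneck is to time successive batches of file creations in a single directory; if each batch takes noticeably longer than the last, inserts are slowing with directory size. This is only a rough sketch (batch size and temp location are arbitrary choices, not from the setup above):

```shell
# Rough benchmark: time each batch of 10,000 empty-file creates as the
# directory grows. A steadily increasing per-batch time suggests that
# directory inserts scale with the number of existing entries.
dir=$(mktemp -d)
for batch in 1 2 3; do
    start=$(date +%s)
    for i in $(seq 1 10000); do
        : > "$dir/file_${batch}_${i}"
    done
    echo "batch $batch: $(( $(date +%s) - start ))s"
done
rm -rf "$dir"
```

Creating empty files locally avoids network and Samba overhead, so any slowdown seen here is attributable to the filesystem alone.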
Is there any kind of file system for Linux that does not have this directory entry problem? Or is this problem related to something totally different? (kernel 2.4.18) I have used EXT3 and XFS file systems so far and neither seems to work well with large numbers of files.
I tried splitting these files up so that no directory held more than 10,000 entries, and the transfer speed stayed up around 18 GB per hour, but all of our existing software expects those files to be in a single directory.
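For reference, the split I tried amounts to something like the following, bucketing files by a two-character prefix of the name (the /data/flat and /data/split paths are placeholders, not the real ones):

```shell
# Hypothetical split: move files from one flat directory into
# subdirectories keyed on the first two characters of the filename,
# keeping each subdirectory well under the problematic size.
src=/data/flat     # placeholder source directory
dst=/data/split    # placeholder destination
mkdir -p "$dst"
for f in "$src"/*; do
    name=$(basename "$f")
    bucket=${name:0:2}          # two-character prefix as bucket name
    mkdir -p "$dst/$bucket"
    mv "$f" "$dst/$bucket/"
done
```

With 923,176 files and two-character buckets, each subdirectory ends up with a few thousand entries on average, which is why the copy stayed fast; the problem is that the software would have to be taught the same prefix scheme to find the files again.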
Thanks for any assistance.