Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
For the past couple of weeks I have been trying to replace a Windows NT 4.0 server with a Linux server.
The Windows NT server has 923,176 files in a single directory.
After getting everything worked out with Samba, I started copying files over. At first the transfer went pretty well, with the network showing anywhere from 40-60% utilization (100 Mbit Ethernet). However, as more and more files were copied, the rate started to slow.
I am assuming this happens because Linux is inserting each new entry into some kind of directory structure that gets slower as it grows. I have tried copying the same files to an NT server and there is no corresponding decrease in transfer speed as the number of files grows.
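To show what I mean, here is a rough sketch of how the slowdown could be measured: create files in batches into one directory and time each batch. The file counts and names are just placeholders; on a filesystem that does linear directory lookups, later batches should take noticeably longer than earlier ones.

```python
import os
import time
import tempfile

def time_creates(dirpath, total, batch):
    """Create `total` empty files in one directory, timing each batch
    of `batch` creations. Returns a list of per-batch durations."""
    timings = []
    for start in range(0, total, batch):
        t0 = time.perf_counter()
        for i in range(start, start + batch):
            # Empty files are enough: the cost being measured is the
            # directory-entry insertion, not the data transfer.
            open(os.path.join(dirpath, f"file{i:07d}"), "w").close()
        timings.append(time.perf_counter() - t0)
    return timings

with tempfile.TemporaryDirectory() as d:
    # Small numbers here just to keep the sketch quick; the real
    # directory holds over 900,000 files.
    batches = time_creates(d, 2000, 500)
    print(batches)
```

If the per-batch times climb steadily, the bottleneck is the directory lookup on each create, not the network.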
Is there any kind of filesystem for Linux that does not have this directory-entry problem? Or is the problem related to something else entirely? (kernel 2.4.18) I have tried ext3 and XFS so far, and neither seems to handle large numbers of files in one directory well.
I tried splitting the files up so that no directory had more than 10,000 entries, and the transfer speed stayed up around 18 GB per hour, but all of our existing software expects those files to be in the same directory.
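For reference, the splitting I tried looked roughly like the sketch below: each filename is mapped to one of a fixed number of subdirectories so no single directory grows without bound. The hash and fanout here are placeholders, not the exact scheme I used.

```python
import os

def bucket_path(root, filename, fanout=100):
    """Map a filename to a subdirectory under `root` so entries are
    spread across `fanout` buckets. The toy hash (byte sum modulo
    fanout) is illustrative only; any stable hash would do."""
    bucket = sum(filename.encode()) % fanout
    return os.path.join(root, f"{bucket:03d}", filename)

# ~923,000 files over 100 buckets is roughly 9,200 entries per
# directory, under the 10,000-entry ceiling mentioned above.
example = bucket_path("/srv/share", "invoice_000123.dat")
print(example)
```

The catch, as noted above, is that every client application would have to be taught this path mapping, which is exactly what I am trying to avoid.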