Hello again!
It's unfortunate that that's part of the directory structure, but some facets of some problems just can't be helped.
However, if applications are doing a directory listing, then we might be able to speed things along.
Quote:
The issue is really server independent. ls command takes just as long to list from shell as via ftp. If i can make ls list faster (via cache or otherwise) that will fix the issue.
The downside is that an "ls" done in a terminal session and an "ls" done from an FTP session are not one and the same in most cases.
FTP commands such as "ls", "cd", "dir", "put", "mput", etc. are not run from the server's /usr/bin directory. They are functions of the FTP server daemon itself, which in turn makes the necessary calls (opendir()/readdir() for a listing, open() for a transfer). How efficiently it does that depends on which FTP daemon you're using.
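To make that concrete, here's a rough Python sketch of what a daemon's "ls" handler boils down to. This isn't any particular daemon's actual code, and the path is just an example:
Code:
# Rough illustration of how an FTP daemon builds a LIST response:
# it never shells out to /usr/bin/ls; it walks the directory itself
# (os.scandir() wraps readdir()) and stat()s each entry to fill in
# the long-listing fields. The path below is hypothetical.
import os
import stat
import time

def build_listing(path="/var/ftp/pub"):
    lines = []
    with os.scandir(path) as entries:               # readdir() under the hood
        for entry in entries:
            st = entry.stat(follow_symlinks=False)  # one lstat() per entry
            mtime = time.strftime("%b %d %H:%M", time.localtime(st.st_mtime))
            lines.append(f"{stat.filemode(st.st_mode)} {st.st_size:>12} {mtime} {entry.name}")
    return "\r\n".join(lines)

if __name__ == "__main__":
    print(build_listing("."))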
For example, if you're using the stock FTP daemon from a Red Hat-based Linux distribution, you're probably running vsftpd. That particular daemon has been touted as the "fastest" by many, but that reputation is about data throughput and simultaneous users, not directory listings.
There's also ProFTPd, an FTP daemon with a modular design that allows for "plug-ins". Its configuration is very similar to Apache's, so it can throw off those of us who aren't web developers.
Both of these FTP daemons are Free / Open Source, so there's no harm in giving them a shot.
But I digress. If you're looking to speed up directory listings of your data, keep in mind that it's the FTP daemon itself that reads and returns the contents. So to speed up that process, you have to think "under" the FTP daemon, down at the filesystem level.
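Before changing anything, it's worth seeing where the time actually goes. Here's a quick Python timing sketch (the path is a placeholder); on big directories, the per-entry stat() calls a long listing needs usually dwarf the bare readdir() pass:
Code:
# Time a names-only pass versus a pass that also stat()s every entry,
# which is roughly the difference between "ls -f" and "ls -l".
import os
import time

def time_listing(path, do_stat):
    start = time.perf_counter()
    count = 0
    with os.scandir(path) as entries:
        for entry in entries:
            if do_stat:
                entry.stat(follow_symlinks=False)
            count += 1
    return count, time.perf_counter() - start

path = "/var/ftp/pub"  # substitute your slow directory
n, t_names = time_listing(path, do_stat=False)
_, t_stat = time_listing(path, do_stat=True)
print(f"{n} entries: names only {t_names:.3f}s, with stat() {t_stat:.3f}s")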
Is the data hosted on a RAID 5 array? (RAID 5 offers fast reads, but only so-so writes because of the parity overhead.)
How about moving this directory to a RAM disk? (That's right: dedicate a chunk of your RAM to a filesystem.) This option would need some way to sync the contents back to the main hard disk(s), but it would definitely improve the read performance of that directory.
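For the sync-up piece, here's a minimal Python sketch, assuming the RAM disk is mounted at /mnt/ramdisk and the durable copy lives at /data/ftp (both paths are made up). Run it from cron often enough that a power cut can only eat one interval of changes:
Code:
# Copy anything missing or newer on the RAM disk back to the real disk.
import os
import shutil

RAM = "/mnt/ramdisk"   # hypothetical tmpfs mount
DISK = "/data/ftp"     # hypothetical persistent copy

def sync_back(src=RAM, dst=DISK):
    for root, dirs, files in os.walk(src):
        target = os.path.join(dst, os.path.relpath(root, src))
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # preserves timestamps/permissions

if __name__ == "__main__":
    sync_back()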
However, there's also a bit of thinking to do "above" the FTP daemon: your clients.
Are they connecting with Active or Passive sessions?
With active sessions, the server opens the data connection back to the client (classically from port 20), so setup can be marginally quicker when nothing is in the way.
With passive sessions, the client initiates everything: the control connection on port 21, plus a data connection to a high port the server picks. That makes no real difference in throughput, but it slips through restrictive firewalls and NAT a lot more easily.
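If you want to see which mode suits your setup, here's a little Python sketch using the standard ftplib module; the host, credentials, and path are placeholders:
Code:
# Compare listing time in passive versus active mode.
import time
from ftplib import FTP

for passive in (True, False):
    ftp = FTP("ftp.example.com")        # placeholder host
    ftp.login("user", "password")       # placeholder credentials
    ftp.set_pasv(passive)               # True = passive, False = active
    start = time.perf_counter()
    names = ftp.nlst("/pub")            # listing travels over the data channel
    elapsed = time.perf_counter() - start
    print(f"passive={passive}: {len(names)} entries in {elapsed:.2f}s")
    ftp.quit()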
Quote:
thanks for your input by the way (do you guys thank each other or is that assumed?
It's always nice when someone clicks the little "thumbs-up" icon at the bottom of our posts. A dose of the warm-fuzzies encourages techie ramblings more often.
I hope I've given you a little FTP-based food for thought, and I'm sure someone else will see this thread and chime in. After all, FTP has been around since the early '70s (RFC 114 dates back to 1971), and only a few features have been added since then (the extensions in RFC 3659, from 2007, were the most recent, I think).
All in all, it's still one of the simplest, fastest ways to transfer files.
Have a good one!