The Best Way to support 100 client PCs transferring files to one Server?
Hi all,
I am evaluating a couple of ways to improve file transfers from about 100 Fedora Linux client PCs to one Windows or Linux server over the local network:
1. NFS
2. Samba
3. iSCSI
4. Others
Which of these is the best model for a large number of client PCs (at least 100) transferring large amounts of data to one server over a 100 Mbit network?
I gather that Windows doesn't support NFSv4 at the moment. I can switch to a Linux server, but what's the best way in practice, not only in theory?
I would generally be looking for a dynamic solution with a simple network interface. Generally, I tend to use scp / sftp if possible: it's all on a single TCP port, etc. I would not really want to use NFS for scheduled jobs, as, without the additional hassle of automount, the shares will always need to be mounted, which might not be something you can rely on. And NFSv3 is pretty ugly / complicated traffic-flow wise.
What are your concerns about doing it with 100 systems, though? There are other factors you haven't discussed. Is this a push or a pull? Do you have control over it being one or the other? How often is this happening? What makes it happen? Do you need to poke each machine somehow to make it happen? How large are these files: 1x1GB, 100x10MB, or 10000x100KB?
Using standard tools, sftp (with pre-shared keys) on a cron job would be a good fit in my book, but then I would be interested in managing the ongoing configuration of this setup with a tool like Puppet, as it's excellent. If you want to trigger these things centrally, MCollective might also be interesting to you; it integrates with Puppet very well.
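A minimal sketch of what I mean; the host name, user, key path and directories below are just placeholders, not anything from your setup:
Code:
#!/bin/bash
# push_logs.sh -- push local logs to the central server over SSH
KEY=/root/.ssh/id_rsa_logs          # placeholder pre-shared (passwordless) key
SERVER=logserver.example.com        # placeholder central server
RUSER=loguser                       # placeholder transfer account
SRC=/var/log/myapp                  # placeholder local log directory
DEST=/srv/logs/$(hostname -s)       # per-client directory on the server

# make sure the per-client directory exists, then copy over SSH (port 22 only)
ssh -i "$KEY" "$RUSER@$SERVER" "mkdir -p '$DEST'" || exit 1
scp -i "$KEY" -rq "$SRC"/. "$RUSER@$SERVER:$DEST/"
A crontab entry like "*/15 * * * * /usr/local/bin/push_logs.sh" would run it every 15 minutes; staggering the minute per host keeps all 100 clients from hitting the server at the same moment.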
Quote:
Originally Posted by acid_kewpie
I would generally be looking for a dynamic solution with a simple network interface. Generally, I tend to use scp / sftp if possible: it's all on a single TCP port, etc. I would not really want to use NFS for scheduled jobs, as, without the additional hassle of automount, the shares will always need to be mounted, which might not be something you can rely on. And NFSv3 is pretty ugly / complicated traffic-flow wise.
Ummm, I have had the painful experience of dealing with 100 PCs mounting one Windows NFS server so that each client PC executes the same main program from the server. 20~30 PCs are fine, but it sometimes gets stuck once the number of clients reaches around 100: the program is very slow to respond and lots of timeouts occur. Now I am thinking of copying the program to the local client PC and checking that it is in sync with the server every time the main program is executed on each client PC.
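Something like this is what I have in mind; just a rough sketch, with a made-up server name and paths:
Code:
#!/bin/bash
# run_main.sh -- sync the program from the server, then run the local copy
SERVER=appserver.example.com        # placeholder server holding the master copy
RUSER=appuser                       # placeholder account on the server
MASTER=/export/app/                 # placeholder master directory (trailing slash: copy contents)
LOCAL=/opt/app                      # local copy on each client

# rsync only transfers what changed, so after the first copy this is cheap;
# --delete keeps the local copy an exact mirror of the master
rsync -az --delete -e ssh "$RUSER@$SERVER:$MASTER" "$LOCAL/" || exit 1

exec "$LOCAL/bin/main"              # placeholder path to the main program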
Quote:
Originally Posted by acid_kewpie
What are your concerns about doing it with 100 systems, though? There are other factors you haven't discussed. Is this a push or a pull? Do you have control over it being one or the other? How often is this happening? What makes it happen? Do you need to poke each machine somehow to make it happen? How large are these files: 1x1GB, 100x10MB, or 10000x100KB?
Actually, the logs created on each PC are copied to the NFS server as well, over NFS. The data is around 10000x100KB on each PC. It would be easy to use one script to mount the server on each client PC before starting the main program from the NFS server; this way, logs are copied to the server in real time. But it caused high network traffic, and the log transfer model depends heavily on network stability.
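One idea I am considering is to batch them instead: bundle the ~10000 small files into a single archive and send that on a schedule, so each batch is one file and one TCP stream rather than thousands of small writes over NFS. A rough sketch, with placeholder names and paths:
Code:
#!/bin/bash
# ship_logs.sh -- bundle the small log files and push one archive per batch
KEY=/root/.ssh/id_rsa_logs          # placeholder pre-shared key
SERVER=logserver.example.com        # placeholder central server
RUSER=loguser                       # placeholder transfer account
LOGDIR=/var/log/myapp               # placeholder log directory
ARCHIVE=/tmp/$(hostname -s)-$(date +%Y%m%d%H%M).tar.gz

# one compressed archive per run: one file, one TCP connection
tar -czf "$ARCHIVE" -C "$LOGDIR" . \
  && scp -i "$KEY" -q "$ARCHIVE" "$RUSER@$SERVER:/srv/logs/incoming/" \
  && rm -f "$ARCHIVE"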
Quote:
Originally Posted by acid_kewpie
Using standard tools, sftp (with pre-shared keys) on a cron job would be a good fit in my book, but then I would be interested in managing the ongoing configuration of this setup with a tool like Puppet, as it's excellent. If you want to trigger these things centrally, MCollective might also be interesting to you; it integrates with Puppet very well.
I will check Puppet to see whether it can meet my expectations for handling the log transfer process.
Well, Puppet won't help you transfer the files; it will let you deliver configuration to the clients, pull various strings, etc., but it wouldn't be involved in the live running of the application, only putting it in place, managing any cron jobs, etc.