Linux - Networking
This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.
I will be making almost entirely Linux to Linux connections for sharing files. I have one big file server and lots of client machines that will read from & write to the file server at the same time - usually large amounts of data. I've been running NFS in sync mode, but have had BIG speed issues (way too slow), so I'm thinking about switching to Samba. I've tried running NFS in async mode for speed, but have run into sync problems as one machine expects a file that another machine (or even the same machine) has written, but it's not complete yet - causing the whole operation to fail.
Is there a reason that I shouldn't switch to Samba?
Is NFS superior or inferior in certain aspects, but not in others?
FYI: All my machines have GigE & run through a single GigE switch. The file server runs RAID 5 on 4 drives, has a separate system drive, and a good amount of RAM, so it should be able to keep up with pretty high network traffic.
I have a very small 100 Mb network. I use Samba because there are Windows machines on my network, and it works very well. Maybe there is a problem with your network rather than with NFS. Have you checked your network? At least run ifconfig to see if there are any error packets - then you'll know whether the trouble is really with NFS.
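To make the "check for error packets" suggestion concrete, here is a sketch. The interface name eth0 is an assumption (substitute your own); the parsing demo at the bottom runs on canned sample output so it works anywhere.

```shell
#!/bin/sh
# On the real machine you would inspect the NIC directly, e.g.:
#   ip -s link show eth0                   # RX/TX errors, drops, overruns
#   ethtool eth0 | grep -E 'Speed|Duplex'  # confirm 1000Mb/s, full duplex
# ("eth0" is an assumption -- use your interface name.)

# Self-contained demo: flag nonzero error counters (3rd column of the
# counter rows) in sample `ip -s link` style output.
sample='RX: bytes packets errors dropped overrun mcast
998877 12345 0 0 0 0
TX: bytes packets errors dropped carrier collsns
887766 12344 2 0 0 0'

printf '%s\n' "$sample" | awk '
  /^(RX|TX):/ { dir = substr($1, 1, 2); next }   # header row: note direction
  $3 > 0      { printf "%s errors: %s\n", dir, $3 }'
```

A steadily climbing error or overrun counter usually points at cabling, a bad port, or a duplex mismatch rather than at NFS itself.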
Samba seems to operate at about the same speed as NFS in sync mode. fooey.
This is incredibly frustrating. I can transfer files via ftp or scp MUCH faster than through Samba or NFS - if NFS or Samba were on par with ftp or scp, I'd be happy. I'm sure there's some sort of something I'm not doing correctly that's killing my speed.
Originally posted by MS3FGX Well, FTP is usually faster than Samba or NFS, just because it is a simpler protocol.
But what kind of speed are we talking about here? How long does it take to transfer 100MB, and how long for 1 GB?
It is hard to say if anything is really wrong unless we can get an idea of how fast (or slow) transfers really are.
I've recently moved the file server from RH7 to RH9. On the RH7 box, the fastest speed I ever saw under NFS was about 22-25 seconds for 256 MB (I think that was in async mode as well). Now I'm seeing 8-9 seconds for the same file (in NFS sync mode) & just over 30 seconds for 1 GB.
That's not such a bad speed & quite a bit better than before, but it still works out to only about 28-34 MB/s (256 MB in 9 s is roughly 28 MB/s; 1 GB in 30 s is roughly 34 MB/s). That seems on the slow side for a 1 Gb/s network. I understand that 1 Gb/s only amounts to about 125 MB/s before protocol overhead, but shouldn't I be seeing a little better than what I am?
There is some overhead, but it does seem like you should be getting a bit faster speeds than that. You aren't even hitting 50% of the theoretical maximum.
You are sure that all of the machines are using Cat6? Perhaps a Cat5 got in there?
Originally posted by MS3FGX There is some overhead, but it does seem like you should be getting a bit faster speeds than that. You aren't even hitting 50% of the theoretical maximum.
You are sure that all of the machines are using Cat6? Perhaps a Cat5 got in there?
Everything is using CAT 5e except the file server, which is using CAT 6.
The switch is made by Netgear - it's unmanaged - just a basic 24 port GigE switch. Not the best in the world, but I would assume that it would at least hit 50%.
I don't think you said this - do you run v2 or v3?
I'm running a similar setup, gig network, several file servers, and I have seen NFS transfer speeds of more than 50MB/s. I think with the complexity of NFS, that is close to the max you can get.
By the way, I use the async option, and the clients have the "hard" option, and I don't seem to have problems with NFS errors, ever, and I shuffle a lot of data.
Sounds like you have enough internal bandwidth in your server for the disks. Another possibility, though, is that your network card and the RAID card share a PCI bus - then you'd get arbitration issues when you have lots of NFS connections to different files (as opposed to the single-file access with ftp, which you said worked fine).
Finally, how many nfsd threads are you setting up?
For more advice, we'd need to see some config facts...
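To check those last two points concretely, the commands below are a sketch; the /etc/sysconfig path and RPCNFSDCOUNT variable are Red Hat conventions of that era, so treat them as assumptions if your distribution differs.

```shell
# How many nfsd threads are running? First number on the "th" line:
grep '^th' /proc/net/rpc/nfsd

# On Red Hat, the count comes from RPCNFSDCOUNT (default 8); raising it,
# e.g. to 16, and restarting nfs is a common first step:
#   echo 'RPCNFSDCOUNT=16' >> /etc/sysconfig/nfs
#   service nfs restart

# Which devices share a PCI bus? The bus:slot.func prefix in lspci
# output tells you -- a NIC and RAID card on the same bus can contend:
lspci -v | grep -E 'Ethernet|RAID|SCSI'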
Originally posted by mlp68 I don't think you said this - do you run v2 or v3?
I'm running a similar setup, gig network, several file servers, and I have seen NFS transfer speeds of more than 50MB/s. I think with the complexity of NFS, that is close to the max you can get.
By the way, I use the async option, and the clients have the "hard" option, and I don't seem to have problems with NFS errors, ever, and I shuffle a lot of data.
Sounds like you have enough internal bandwidth in your server for the disks. Another possibility, though, is that your network card and the RAID card share a PCI bus - then you'd get arbitration issues when you have lots of NFS connections to different files (as opposed to the single-file access with ftp, which you said worked fine).
Finally, how many nfsd threads are you setting up?
For more advice, we'd need to see some config facts...
// on the client side:
# cat /etc/fstab
uranium:/mnt/farm /mnt/farm nfs bg,noac,rw,hard,rsize=16384,wsize=16384 0 0
So...
nfs v2 & v3
8 nfsd threads
re: async - I didn't really run into any nfs errors with async; I ran into problems with files not being completely written before they were accessed. This is a render farm. Some of our renders create a shadow pic for each frame, which then gets used immediately by the frame being rendered. The shadow pics weren't done writing before the renderer locked the file to read it - which made the file invalid & unusable. Result: no shadows. Switching back to sync fixed the problem but made the nfs writes horribly slow. (This was on the old RH7 file server - I haven't run the test on the new box yet.)
re: pci bus - I suppose it's possible & I'm not quite sure how to check. The NIC is built into the motherboard. The mobo runs the 865G chipset with ICH5 & Intel 82547EZ PLC chip for Gigabit LAN. The RAID card is plugged into a PCI slot on the board.
Thanks again for taking a look at this. ANY suggestions are greatly appreciated (being that I don't know of many other resources to help at this point).
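For completeness, the sync/async choice that started this thread lives on the server side, in /etc/exports, not in the client fstab. A sketch below - the export path matches the fstab line above, but the client subnet is an assumption:

```shell
# /etc/exports on the server (uranium) -- sync vs async is set here,
# per export:
#
#   /mnt/farm  192.168.1.0/24(rw,sync)
#   /mnt/farm  192.168.1.0/24(rw,async)   # faster, but risks the
#                                         # half-written-file problem
#
# After editing, re-export without restarting the whole service:
exportfs -ra
```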
Well, yes, I think the sync bit gets you. Just for completeness, see what you get by adding a nfsvers=3 to the clients, and increase the r/wsize to 32k. I think that when given a choice, the partners negotiate v3, but you want to make sure.
lspci -v will list who's who on which bus.
I don't know what renderer you are using, but maybe you can provide some external means to control access - a lock file, moving the file into a different directory, or renaming it to indicate it is done, etc. Just out of curiosity, you seem to farm out individual frames. Could you process one frame beginning to end on one machine and move the final image to the NFS area?
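The rename idea maps well onto the shadow-pass problem, because a rename within one filesystem is atomic: readers either see no file or a complete one, never a partial write. A minimal sketch (the file names are made up for illustration):

```shell
#!/bin/sh
# Write-to-temp-then-rename: consumers never observe a partial file.
set -e
dir=$(mktemp -d)

# "Renderer": write the shadow pass under a temporary name...
printf 'shadow data\n' > "$dir/shadow.0001.pic.tmp"
# ...then rename only once the write is complete (atomic on one fs).
mv "$dir/shadow.0001.pic.tmp" "$dir/shadow.0001.pic"

# "Consumer": proceed only when the final name exists.
if [ -f "$dir/shadow.0001.pic" ]; then status=ready; else status=missing; fi
echo "$status"

rm -rf "$dir"
```

Since the rename happens on the NFS server, a reader that polls for the final name never opens a half-written file, even with async writes on.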
Originally posted by mlp68 Well, yes, I think the sync bit gets you. Just for completeness, see what you get by adding a nfsvers=3 to the clients, and increase the r/wsize to 32k. I think that when given a choice, the partners negotiate v3, but you want to make sure.
lspci -v will list who's who on which bus.
I don't know what renderer you are using, but maybe you can provide some external means to control access - a lock file, moving the file into a different directory, or renaming it to indicate it is done, etc. Just out of curiosity, you seem to farm out individual frames. Could you process one frame beginning to end on one machine and move the final image to the NFS area?
Thanks for the suggestions - I'll give 'em a try later this evening.
As far as rendering - we're using Mantra (Houdini's renderer). It is possible to move the scene description to the local machine, render it, then move the rendered frame to the file server. I'm not sure that would help speed-wise, though: the moves would have to complete before the render or the next frame could start, as opposed to reading, writing & rendering at the same time. And with scene description files that often get upwards of 50-60 MB, it might be a considerable bottleneck (even worse if the frame requires multiple shadow pics, zdepth pics, etc.). Granted, I could use async in that case. Worth a try, for sure.