Performance difference between sharing disk space via NFS or SSHFS
I have a "poor man's" cluster, i.e. 10 boxes connected via 1Gb/s switch and am trying to use for simple MPI calculations. So far I have used sshfs to get a shared disk space on the machines but it seems that the calculations scale worse than I expected. Is there a huge penalty with using sshfs rather than sshfs?
|
@plesset, please clarify your question, it is confusing. Which one is showing degraded performance, NFS or sshfs?
sshfs uses the ssh protocol, so all traffic goes through an encryption/decryption cycle and there is bound to be a performance penalty. NFS sends data in cleartext, so its performance is generally better than sshfs. |
Sorry for the confusion. I'm using sshfs (since it is easy and I know how to set it up), but I find the performance disappointing. So I think I should be using NFS, but I'm not sure how to configure it, especially since all the machines are directly connected to the internet. Will using hosts.allow & hosts.deny be enough? I could also ask the network admin to block all access except from one machine. Right now there is only ssh access to the machines.
|
You can specify which IP address(es) you want to allow NFS connections from in /etc/exports. Having the machine on the internet shouldn't be a problem as long as you don't set up /etc/exports to allow NFS connections from anybody.
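For example, a minimal /etc/exports sketch on the server (the hostnames, paths and addresses below are only placeholders for your own setup):
Code:
# /etc/exports on the NFS server
# export to a single trusted host...
/home/xxxx/fds   192.168.1.11(rw,sync,no_subtree_check)
# ...or to a whole private subnet
/home/xxxx/fds   192.168.1.0/24(rw,sync,no_subtree_check)
After editing it, run exportfs -ra on the server to apply the changes.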
|
NFS is dead easy to set up.
It's very reliable and efficient. If you have a private subnet connecting the machines, just allow connections from that subnet only. I would assume you will have a central data server and connect your compute nodes to that. |
Have you read about GlusterFS? For what seems to be your purpose, it looks like a good alternative to NFS and sshfs.
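If you want to try it, here is a rough sketch of a replicated volume across three nodes (the hostnames, brick paths and replica count are just placeholders, assuming glusterfs-server is installed on every box):
Code:
# run on node1, after the daemon is up on all nodes
gluster peer probe node2
gluster peer probe node3
gluster volume create gv0 replica 3 node1:/data/brick/gv0 node2:/data/brick/gv0 node3:/data/brick/gv0
gluster volume start gv0
# mount the volume on any client
mount -t glusterfs node1:/gv0 /mnt/gv0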
|
I installed NFS and mapped the shared space in /etc/fstab as
Code:
toppond:/home/xxxx/fds /home/xxxx/fds nfs rw,rsize=16384,wsize=16384,hard,intr,async,nodev,nosuid 0 0
When I test it, I actually find it to be slower than SSHFS (especially when writing)...
Code:
> time dd if=/dev/zero of=/home/xxxx/fds/test bs=16k count=16k
Code:
> sudo umount fds
Where have I gone wrong? |
Well, there you go.
Seems quite fast to me. Why do you think it's wrong? It is what it is. |
I prefer sshfs because it is almost as fast as NFS and easier to use (it can also mount directories outside of your network).
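If you stay with sshfs, the mount options matter too; a rough sketch of the kind of thing worth tuning (user, host and paths are placeholders, and the exact option names depend on your sshfs/ssh versions):
Code:
# cheaper cipher and no compression cut CPU overhead; big_writes lets FUSE send larger write requests
sshfs -o Ciphers=aes128-ctr -o Compression=no -o big_writes \
      user@toppond:/home/xxxx/fds /home/xxxx/fds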
Maybe the PCs (updates/load), routers (firmware), NICs (drivers) or cables are just slow? This thread seems to be a case of slow/bad hard drives hampering network performance: http://www.linuxquestions.org/questi...ts-4175449064/ |
Thank you all. At least I have gone through the process, it is always beneficial to learn something new, and now I have the option to use both. Given the hassle of setting up/configuring NFS compared to SSHFS, I had assumed the gain would be greater.
|
I wouldn't call it done right there; from the looks of it you only tried one small transfer with one block size. Try a larger file (at least 1 GB) and different block sizes to see how that affects things. Also do each test a few times to make sure the results are consistent.
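For example, a rough sketch of that kind of test (the sizes and paths are just placeholders; conv=fsync makes dd wait for the data to actually reach the server instead of timing only the local write-back cache):
Code:
# write ~1 GiB with different block sizes; repeat each run a few times
sync; time dd if=/dev/zero of=/home/xxxx/fds/test bs=16k  count=65536 conv=fsync
sync; time dd if=/dev/zero of=/home/xxxx/fds/test bs=128k count=8192  conv=fsync
sync; time dd if=/dev/zero of=/home/xxxx/fds/test bs=1M   count=1024  conv=fsync
# read it back after dropping the client-side page cache
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
time dd if=/home/xxxx/fds/test of=/dev/null bs=1M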
Also make sure there is no other I/O on either system during the tests if you want a fair comparison. |
There is an SSHFS/NFS read/write benchmark here: http://prokop.uek.krakow.pl/projects/fs_benchmark.html
|