LinuxQuestions.org
Linux - Networking This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.

Old 02-05-2013, 07:44 AM   #1
plesset
LQ Newbie
 
Registered: Oct 2008
Location: Reykjavik, Iceland
Distribution: openSUSE 12-2, Debian-Wheezy, Windows 7
Posts: 17

Rep: Reputation: 0
Performance difference between sharing disk space via NFS or SSHFS


I have a "poor man's" cluster, i.e. 10 boxes connected via a 1 Gb/s switch, which I am trying to use for simple MPI calculations. So far I have used sshfs to get shared disk space across the machines, but the calculations seem to scale worse than I expected. Is there a big penalty for using sshfs rather than NFS?
 
Old 02-05-2013, 10:29 AM   #2
em31amit
Member
 
Registered: Apr 2012
Location: /root
Distribution: Ubuntu, Redhat, Fedora, CentOS
Posts: 190

Rep: Reputation: 55
@plesset, please clarify your question, it is confusing. Which one shows degraded performance, NFS or SSHFS?

sshfs runs over the SSH protocol, so all traffic goes through an encryption/decryption cycle, and there is inevitably some performance penalty.
NFS, by contrast, sends data in cleartext, so its performance is generally better than sshfs.
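If the encryption overhead turns out to matter, sshfs can usually be tuned a bit. A minimal sketch for a trusted LAN (host name and paths are examples; the options are standard OpenSSH ones that sshfs passes through to ssh):

```shell
# Sketch: reduce sshfs CPU overhead on a trusted 1 Gb/s LAN.
# -o Ciphers=aes128-ctr  pick a cheaper cipher (fast on most CPUs)
# -o Compression=no      skip compression, which rarely helps on a LAN
sshfs -o Ciphers=aes128-ctr -o Compression=no \
      user@toppond:/home/user/fds /home/user/fds
```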
 
Old 02-05-2013, 11:16 AM   #3
plesset
LQ Newbie
 
Registered: Oct 2008
Location: Reykjavik, Iceland
Distribution: openSUSE 12-2, Debian-Wheezy, Windows 7
Posts: 17

Original Poster
Rep: Reputation: 0
Sorry for the confusion. I'm using sshfs (since it is easy and I know how to set it up), but I find the performance disappointing. So I think I should be using NFS, but I'm not sure how to configure it, especially since all the machines are directly connected to the internet. Will using hosts.allow and hosts.deny be enough? I could also ask the network admin to block all access except from one machine. Currently there is only ssh access to the machines.
 
Old 02-05-2013, 02:16 PM   #4
suicidaleggroll
Senior Member
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 2,843

Rep: Reputation: 1006
You can specify which IP address(es) you want to allow NFS connections from in /etc/exports. Having the machine on the internet shouldn't be a problem as long as you don't set up /etc/exports to allow NFS connections from anybody.
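For example, a minimal /etc/exports sketch on the file server, assuming the cluster sits on a 192.168.1.0/24 subnet (paths and addresses are placeholders):

```shell
# /etc/exports sketch -- adjust path and subnet to your setup.
# Only hosts on the cluster subnet may mount; 'async' trades some
# crash safety for write speed, 'sync' is the safer default.
/home/xxxx/fds  192.168.1.0/24(rw,async,no_subtree_check)
```

After editing the file, running `exportfs -ra` on the server re-reads the export list.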
 
Old 02-06-2013, 02:22 AM   #5
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: FreeBSD, Debian, Mint, Puppy
Posts: 3,287

Rep: Reputation: 173
NFS is dead easy to set up.
It's very reliable and efficient.

If you have a private subnet connecting the machines, just allow connections only from that subnet. I assume you will have a central data store and connect your compute nodes to that.
 
Old 02-06-2013, 01:28 PM   #6
lupe
Member
 
Registered: Dec 2008
Distribution: Slackware
Posts: 35

Rep: Reputation: 2
Have you read about GlusterFS?
For what seems to be your purpose, it looks like a good alternative to nfs and sshfs.
 
Old 02-08-2013, 07:50 AM   #7
plesset
LQ Newbie
 
Registered: Oct 2008
Location: Reykjavik, Iceland
Distribution: openSUSE 12-2, Debian-Wheezy, Windows 7
Posts: 17

Original Poster
Rep: Reputation: 0
I installed NFS and mount the shared space as
Code:
toppond:/home/xxxx/fds /home/xxxx/fds nfs rw,rsize=16384,wsize=16384,hard,intr,async,nodev,nosuid 0 0
in fstab.


When I test it I actually find it to be slower than SSHFS (especially when writing)...
Code:
> time dd if=/dev/zero of=/home/xxxx/fds/test bs=16k count=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 4.91929 s, 54.6 MB/s

real    0m5.015s
user    0m0.000s
sys     0m0.160s

> time dd if=/home/xxxx/fds/test of=/dev/null bs=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.31482 s, 116 MB/s

real    0m2.317s
user    0m0.008s
sys     0m0.120s
Code:
> sudo umount fds
[sudo] password for xxxx: 

> sshfs toppond:/home/xxxx/fds fds

> time dd if=/dev/zero of=/home/xxxx/fds/test bs=16k count=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 3.42332 s, 78.4 MB/s

real    0m3.457s
user    0m0.000s
sys     0m0.244s

> time dd if=/home/xxxx/fds/test of=/dev/null bs=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 3.53909 s, 75.8 MB/s

real    0m3.560s
user    0m0.004s
sys     0m0.072s

Where have I gone wrong?
 
Old 02-08-2013, 08:44 AM   #8
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: FreeBSD, Debian, Mint, Puppy
Posts: 3,287

Rep: Reputation: 173Reputation: 173
well there you go.
seems quite fast to me.
why do you think it's wrong? it is what it is.
 
Old 02-08-2013, 09:11 AM   #9
schneidz
Senior Member
 
Registered: May 2005
Location: boston, usa
Distribution: fc-15/ fc-20-live-usb/ aix
Posts: 4,010

Rep: Reputation: 624
i prefer sshfs because it is almost as fast as nfs and easier to use (it can also mount directories outside of your network).

maybe the pc's (updates/load), routers (firmware), nic's (drivers), or cables are just slow?

in this thread, slow/bad hard drives seem to be hampering network performance:
http://www.linuxquestions.org/questi...ts-4175449064/
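A basic sshfs mount along those lines might look like this (host and paths are made up; `-o reconnect` re-establishes a dropped connection, which matters for hosts outside your LAN):

```shell
# Mount a remote directory over plain SSH -- no server-side setup needed
sshfs -o reconnect user@remote.example.com:/data /mnt/data

# ...and to unmount a FUSE filesystem:
fusermount -u /mnt/data
```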

Last edited by schneidz; 02-08-2013 at 09:53 AM.
 
Old 02-08-2013, 10:07 AM   #10
plesset
LQ Newbie
 
Registered: Oct 2008
Location: Reykjavik, Iceland
Distribution: openSUSE 12-2, Debian-Wheezy, Windows 7
Posts: 17

Original Poster
Rep: Reputation: 0
Thank you all. At least I have been through the process, it is always beneficial to learn something new, and now I have the option to use both. Given the hassle of setting up and configuring NFS compared to SSHFS, I assumed the gain would be greater.
 
Old 02-08-2013, 10:10 AM   #11
suicidaleggroll
Senior Member
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 2,843

Rep: Reputation: 1006
I wouldn't call it done right there; from the looks of it you only tried one small transfer with one block size. Try a larger file (at least 1 GB) and different block sizes to see how that affects things. Also run each test a few times to make sure the results are consistent.

Also make sure there is no other I/O on either system during the tests if you want the fairest comparison.
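The suggested sweep could be scripted roughly like this. TESTFILE and the sizes are placeholders (point TESTFILE at the NFS or SSHFS mount), and `conv=fsync` makes dd flush the data out before reporting a rate, so client-side caching doesn't inflate the write numbers — a possible explanation for SSHFS appearing to "beat" NFS above:

```shell
#!/bin/sh
# Sketch: one transfer size, several block sizes, several repetitions.
TESTFILE=${TESTFILE:-/tmp/ddtest}
SIZE_MB=${SIZE_MB:-64}   # use 1024+ for a real run, as suggested above
REPS=3

for BS in 4096 65536 1048576; do
    COUNT=$(( SIZE_MB * 1024 * 1024 / BS ))
    for REP in $(seq 1 $REPS); do
        # conv=fsync forces the data out of the page cache before dd
        # reports a rate, so cached writes don't skew the comparison
        dd if=/dev/zero of="$TESTFILE" bs="$BS" count="$COUNT" conv=fsync 2>&1 | tail -n 1
        rm -f "$TESTFILE"
    done
done
```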

Last edited by suicidaleggroll; 02-08-2013 at 10:14 AM.
 
Old 09-20-2014, 05:03 AM   #12
postcd
Member
 
Registered: Oct 2013
Posts: 263

Rep: Reputation: Disabled
There is an SSHFS/NFS read/write benchmark here: http://prokop.uek.krakow.pl/projects/fs_benchmark.html
 
  


Tags
mpi, nfs, performance, sshfs

