Old 02-19-2011, 03:30 AM   #16
disturbed1
Senior Member
 
Registered: Mar 2005
Location: USA
Distribution: Slackware
Posts: 1,133
Blog Entries: 6

Rep: Reputation: 224

I'm not really sure what's going on here.
You've tested and eliminated the obvious things (NICs, cables, protocols, switch).
I did notice that Debian Stable/Testing is using 2.6.32.x, while Sid is pushing 2.6.37. It might be worth exploring an updated kernel, or even using the upstream driver from Realtek themselves. With my onboard rtl81xx chips, I had to use that driver in the past because of Linux's failure to properly support ASPM on those chips. That's something Intel also has a problem with (on a small number of chips), not to mention their EEPROM issues and so on (Intel is not as perfect as one would hope). At least with Intel you get quick support, and their devs are fast, eager, and precise about fixing issues.

I'm not convinced it's an issue with the r8169 module. Something tells me that if it were, even single-file transfers would be slow. Ubuntu uses a newer kernel that has not just a newer r8169 module, but also refinements to NFS and ext4.

I do have this Intel card in my arsenal -
Code:
01:04.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
And sure, it is better than the built-in r8168s and $5 r8169 PCI cards. But is it $30 better? Not in a home environment.

I ran a bunch of tests here, using a ~2GiB file and a directory with roughly 100 files in 2 subdirectories, between two RTL8111/8168B NICs, over SCP, NFS, and CIFS. SCP was the fastest (~73MiB/s); NFS and CIFS were near identical. Single-file transfers were ~70MiB/s, directory transfers were ~65MiB/s. The main difference is that NFS and SCP use ~2% CPU, while CIFS hits up to 30% CPU. My exports are formatted as JFS.
iostat reported 98% utilization on the sender and 60% utilization on the receiver. A sketch of the kind of timing runs I mean is below.
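If you want to reproduce this sort of comparison, here's a minimal sketch; the user, host, and file names are placeholders for your own setup:
Code:
# time one large file and one directory of small files over scp
time scp bigfile.bin user@server:/tmp/
time scp -r testdir user@server:/tmp/
# the same copies through an NFS mount, for comparison
time cp bigfile.bin /mnt/nfs/
time cp -r testdir /mnt/nfs/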

There are some tools available to see where your bottleneck is (example invocations follow the list):
iostat
nfsstat
rpcinfo
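A rough sketch of running each on the server while a transfer is in flight (iostat's -x flag needs the sysstat package installed):
Code:
iostat -x 2            # extended per-device I/O stats, refreshed every 2 seconds
nfsstat -s             # server-side NFS call counters
rpcinfo -p localhost   # registered RPC services and their ports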


At this point, if I had to make a guess: a kernel issue, or file system/mounting options.
I'm curious how another file system (ext3, JFS, or even XFS) would perform.
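If you want to test one on a scratch partition, something like this would do; the device name here is hypothetical, and mkfs will destroy whatever is on it:
Code:
# /dev/md1 stands in for a spare test array or partition
mkfs.jfs -q /dev/md1             # -q skips the confirmation prompt
mount -t jfs /dev/md1 /mnt/test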
 
Old 02-21-2011, 08:01 AM   #17
explosive_tom
LQ Newbie
 
Registered: Dec 2008
Posts: 13

Original Poster
Rep: Reputation: 0
I'm up for trying a new filesystem. I am currently using md (raid 1) with two disks formatted as ext4. Which FS would you recommend I go for? Would JFS be worth a try?

Last edited by explosive_tom; 02-21-2011 at 08:05 AM.
 
Old 02-21-2011, 10:02 PM   #18
disturbed1
Senior Member
 
Registered: Mar 2005
Location: USA
Distribution: Slackware
Posts: 1,133
Blog Entries: 6

Rep: Reputation: 224
Sure, JFS is a decent enough FS. They all have pros and cons. Cons to JFS: it's old, and some would say unmaintained. No new features have been committed for years (isn't that a PRO?). Though IBM has in the past, and will in special cases, SSH into your system and do repairs. The mailing list is not full of activity, but questions are answered in a timely manner.

ext3 is still considered the standard, reliable FS.
ext4 is the replacement for ext3, though still under active development. Some do, and some do not, consider ext4 production stable, though it is far more stable and feature complete than in previous kernels.
XFS and ReiserFS are other choices.

It's just personal choice. ext4 has/had notable problems with some applications, like degraded SQL performance, and a kernel oops with NFS (since fixed). But other FSs have their own limitations as well.

I use ext4 for most partitions, but the server uses JFS because it's an older data drive that was formatted long ago, before ext4 was considered stable, on an underpowered system. JFS uses less CPU than other FSs, and has been benchmarked as faster in most operations than most file systems.

Your situation could be an issue with the FS, RAID, the kernel's NFS/NIC/MD modules, or a combination thereof. I believe you've done more than enough to rule out an issue with hardware and connectivity. A few quick checks are sketched below.
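As a first pass, these will report the kernel version, the NIC driver version (module name taken from earlier in this thread), and the md RAID state:
Code:
uname -r                         # running kernel version
modinfo r8169 | grep -i version  # r8169 driver version
cat /proc/mdstat                 # md RAID status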
 
Old 02-22-2011, 01:42 PM   #19
explosive_tom
LQ Newbie
 
Registered: Dec 2008
Posts: 13

Original Poster
Rep: Reputation: 0
Well, I have just created a 10GB md RAID 1 JFS partition and I'm getting about 20MB/s for small and large files now :s !
Although now it goes at 20MB/s for 10 seconds, appears to stall for 10 seconds, then repeats in a stop/start fashion.
I don't know where to go from here at all. I've now lost performance with large files but gained some with small ones.
 
Old 02-22-2011, 03:43 PM   #20
besolius
LQ Newbie
 
Registered: Feb 2011
Posts: 13

Rep: Reputation: 1
The answer is pretty simple.

If you have files no larger than 2MB, then the resulting speed will be no more than 2MB/s, because it will not have the need for more speed. And for the other guys who tell you that you need a gigabit router, my answer is this: as long as the files are transmitted within the same network, the router couldn't care less, because the other computer is found by its L2 address, which is the MAC address. Coming back to your question: if you have 200 files, each one maxing out at about 2-3MB, then you will get 2-3MB/s. That is why you observed the increase in speed with one large file. The same thing applies when you are moving files from one HDD to another HDD/partition. I hope this was helpful.
 
Old 02-22-2011, 03:53 PM   #21
besolius
LQ Newbie
 
Registered: Feb 2011
Posts: 13

Rep: Reputation: 1
Try archiving them before the transfer, even though the time it takes to do the archiving could just as well be spent transferring the files.
 
Old 02-22-2011, 04:27 PM   #22
explosive_tom
LQ Newbie
 
Registered: Dec 2008
Posts: 13

Original Poster
Rep: Reputation: 0
Can it not transfer files in parallel to increase speed though?
Surely it sees it has a block of data xMB in size (albeit made up of many small files) and transfers that data at the speed of the network link?

Last edited by explosive_tom; 02-22-2011 at 04:32 PM.
 
Old 02-22-2011, 10:21 PM   #23
disturbed1
Senior Member
 
Registered: Mar 2005
Location: USA
Distribution: Slackware
Posts: 1,133
Blog Entries: 6

Rep: Reputation: 224
Quote:
Originally Posted by besolius
The answer is pretty simple.

If you have files no larger than 2MB, then the resulting speed will be no more than 2MB/s, because it will not have the need for more speed.
That's not correct at all. scp will report a speed like that, but the report itself is misleading.

Let's say the average transfer rate is 70MiB/s. A 140MiB file would take 2 seconds, a 70MiB file 1 second, and a 35MiB file 0.5 seconds. If a 35MiB file transfers in 0.5 seconds, the speed is 70MiB/s, not 35MiB/s. Not exact, because of packet overhead and ACK delay, but you get the picture.

If the network, latency, and PC are quick enough, a 2MiB file transfer is effectively instant, meaning less than a second. If a 2MiB file transfers in 0.05 seconds, then clearly the speed is much quicker than 2MiB/s (40MiB/s, in fact).

If each small file takes 1 second to transfer, then clearly something is wrong; 1000 files would take 1000 seconds! There's an issue somewhere: network latency, kernel, CPU, etc.

Here's an example (across slow wireless):

A directory with 20 2MiB files. If, following your explanation, the transfer rate can be no greater than 2MiB/s, then the transfer should take 20 seconds.
Code:
$time scp -r 01 keith@backend:/home/keith/
keith@backend's password: 
2.16                                                                                        100% 2048KB   2.0MB/s   00:00    
2.02                                                                                        100% 2048KB   2.0MB/s   00:00    
2.08                                                                                        100% 2048KB   2.0MB/s   00:00    
2.03                                                                                        100% 2048KB   2.0MB/s   00:00    
2.09                                                                                        100% 2048KB   2.0MB/s   00:00    
2.12                                                                                        100% 2048KB   2.0MB/s   00:00    
2.10                                                                                        100% 2048KB   2.0MB/s   00:00    
2.14                                                                                        100% 2048KB   2.0MB/s   00:00    
2.06                                                                                        100% 2048KB   2.0MB/s   00:00    
2.17                                                                                        100% 2048KB   2.0MB/s   00:00    
2.20                                                                                        100% 2048KB   2.0MB/s   00:00    
2.05                                                                                        100% 2048KB   2.0MB/s   00:00    
2.04                                                                                        100% 2048KB   2.0MB/s   00:00    
2.11                                                                                        100% 2048KB   2.0MB/s   00:00    
2.15                                                                                        100% 2048KB   2.0MB/s   00:00    
2.18                                                                                        100% 2048KB   2.0MB/s   00:00    
2.07                                                                                        100% 2048KB   2.0MB/s   00:00    
2.01                                                                                        100% 2048KB   2.0MB/s   00:00    
2.19                                                                                        100% 2048KB   2.0MB/s   00:00    
2.13                                                                                        100% 2048KB   2.0MB/s   00:00    

real	0m16.783s
user	0m3.413s
sys	0m0.937s
When a file transfers in a second or less, scp reports the speed as roughly the file size per second, which is not the real throughput. The wall-clock time tells the real story, worked out below.
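Working it out from the totals above (20 files of 2MiB each, against the real time reported by time):
Code:
# 20 x 2MiB = 40MiB moved in 16.783s of wall-clock time
$ echo 'scale=2; (20 * 2) / 16.783' | bc
2.38
So the aggregate rate was ~2.38MiB/s even over slow wireless, above the supposed 2MiB/s per-file cap, and the whole run finished under the predicted 20 seconds.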

Last edited by disturbed1; 02-22-2011 at 10:31 PM. Reason: better example
 
Old 02-23-2011, 12:24 AM   #24
disturbed1
Senior Member
 
Registered: Mar 2005
Location: USA
Distribution: Slackware
Posts: 1,133
Blog Entries: 6

Rep: Reputation: 224
Could you post your nfsstat / iostat output from the server, so we can see where you are I/O bound? nfsstat will show where your time is being spent. I'd suspect you're being hit by attribute caching (or rather, the lack of it).

Try these mount options:
Code:
intr,async,tcp,ac
ac forces client-side attribute caching on. A full mount line using these options is sketched below.
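For reference, here's how that might look as a complete mount command; the export path and mount point are placeholders for your setup:
Code:
mount -t nfs -o intr,async,tcp,ac marvin:/export/data /mnt/data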
 
Old 02-25-2011, 03:08 PM   #25
explosive_tom
LQ Newbie
 
Registered: Dec 2008
Posts: 13

Original Poster
Rep: Reputation: 0
I have been doing more tests. I did a crazy thing in desperation and bought myself two Intel Pro/1000 GT PCI gigabit NICs, one for my desktop and the other for the server. Sadly, no real difference. I get pretty good iperf results with an MTU of 9000 (around 880Mbit/s), but NFS is just as bad. In fact, large files seem to have got worse as well.

Looking at the system monitor while I do a copy, the network only bursts (to about 25MB/s) every few seconds and seems to stop in between.

nfsstat on server:
Code:
tom@marvin:~$ sudo nfsstat
Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
25713      0          0          0          0       

Server nfs v3:
null         getattr      setattr      lookup       access       readlink     
5         0% 22        0% 7113     27% 120       0% 160       0% 0         0% 
read         write        create       mkdir        symlink      mknod        
11        0% 13409    52% 2341      9% 61        0% 0         0% 0         0% 
remove       rmdir        rename       link         readdir      readdirplus  
0         0% 0         0% 8         0% 0         0% 0         0% 14        0% 
fsstat       fsinfo       pathconf     commit       
21        0% 5         0% 2         0% 2348      9% 

Server nfs v4:
null         compound     
2        40% 3        60% 

Server nfs v4 operations:
op0-unused   op1-unused   op2-future   access       close        commit       
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
create       delegpurge   delegreturn  getattr      getfh        link         
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
lock         lockt        locku        lookup       lookup_root  nverify      
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
open         openattr     open_conf    open_dgrd    putfh        putpubfh     
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
putrootfh    read         readdir      readlink     remove       rename       
3       100% 0         0% 0         0% 0         0% 0         0% 0         0% 
renew        restorefh    savefh       secinfo      setattr      setcltid     
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
setcltidconf verify       write        rellockowner bc_ctl       bind_conn    
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
exchange_id  create_ses   destroy_ses  free_stateid getdirdeleg  getdevinfo   
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
getdevlist   layoutcommit layoutget    layoutreturn secinfononam sequence     
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
set_ssv      test_stateid want_deleg   destroy_clid reclaim_comp 
0         0% 0         0% 0         0% 0         0% 0         0% 

Client rpc stats:
calls      retrans    authrefrsh
0          0          0
iostat on server:
Code:
tom@marvin:~$ sudo iostat
Linux 2.6.32-5-amd64 (marvin) 	25/02/11 	_x86_64_	(4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.11    0.00    0.31    0.52    0.00   99.07

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               9.04         0.49      1299.07       2424    6413108
sdb               9.04         0.50      1299.07       2454    6413108
md0              12.92         0.60      1299.05       2938    6413000
sdc               1.64        61.69        20.15     304566      99452
These were taken after copying a 1.2GB folder of ~2-3MB images, with the NFS mount options you suggested in your previous post.
 
  

