LinuxQuestions.org
Linux - Networking This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.

Old 10-29-2007, 02:31 PM   #1
bcg121
LQ Newbie
 
Registered: Oct 2007
Location: Pennsylvania
Posts: 19

Rep: Reputation: 0
Testing bonding performance with scp


I have two PCs. Each is running Fedora Core 5. Each PC has an Intel PRO/1000 PT dual port NIC. eth0 of PC#1 is directly connected to eth0 of PC#2, and eth1 of PC#1 is directly connected to eth1 of PC#2. I.e., there is no switch in the mix. eth0 and eth1 are bonded together on both PCs.

Bonding appears to be working, as the # of TX packets on eth0 + # of TX packets on eth1 = # of TX packets on bond0. Same for RX packets. They are pretty much evenly split between the two interfaces.
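That counter check can be done quickly from the shell; a rough sketch using the standard bonding driver's /proc interface and the era's ifconfig (interface names as in this setup):

```shell
# Show bonding mode, slave state, and link status.
cat /proc/net/bonding/bond0

# Per-interface packet counters; with balance-rr, the eth0 and eth1
# counts should each be roughly half of bond0's totals.
for i in eth0 eth1 bond0; do
    echo "== $i =="
    ifconfig $i | grep packets
done
```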

However, I'm trying to use scp and the time command to see if performance has improved.

bond0 on PC#1: 10.0.0.1
bond0 on PC#2: 10.0.0.2

The command I use to scp a bitmap file (between 10 and 50 MB) from PC#1 to PC#2 is:
time scp <filename> root@10.0.0.2:/home/images

I've tried balance-rr, with tcp_reordering at 3 and at 127.

The problem is that the time it takes to scp the file from one PC to the other is about the same with bonding as without it, if not slightly slower.

Any idea what I am doing wrong? Please let me know if you need more information.
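For reference, the MB/s figure follows from the file size and the elapsed seconds that `time` reports; a quick sketch with a hypothetical ~47 MB file (1048576 bytes per MB):

```shell
# throughput = bytes / seconds / 1048576 (binary MB)
awk 'BEGIN { bytes = 47023158; secs = 1.46; printf "%.2f MB/s\n", bytes / secs / 1048576 }'
# prints 30.72 MB/s
```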
 
Old 10-30-2007, 12:40 PM   #2
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,362

Rep: Reputation: 172
If your link is running faster than the drives can write, you will see this behavior. GigE by itself is faster than a lot of drives.
 
Old 10-30-2007, 02:28 PM   #3
bcg121
LQ Newbie
 
Registered: Oct 2007
Location: Pennsylvania
Posts: 19

Original Poster
Rep: Reputation: 0
Thanks. Does that make sense given the following data?

Without bonding:

Image      Bytes      sec    MB/s
---------------------------------
Image #1   47023158   1.46   30.65
Image #2   13304886   0.65   19.57
Image #3   13304886   0.62   20.33
Image #4   19039286   0.77   23.62
Image #5   20431926   0.81   24.11
Image #6   36709430   1.22   28.67

With bonding (balance-rr with tcp_reordering at 127):

Image      Bytes      sec    MB/s
---------------------------------
Image #1   47023158   1.54   29.16
Image #2   13304886   0.66   19.23
Image #3   13304886   0.70   18.12
Image #4   19039286   0.84   21.51
Image #5   20431926   0.89   21.87
Image #6   36709430   1.29   27.21

The PCs are identical.
CPU: Pentium 4 3.00GHz
Memory: 2GB RAM
Disk: TOSHIBA MK4032GAX 40.0GB
NIC: Intel PRO/1000 PT dual port
OS: Fedora Core 5

How can I prove that the disks are the bottleneck? What other methods are there to verify that throughput has actually increased with bonding?
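(One direct way to check the write side is to time a large sequential write with dd; a sketch — conv=fdatasync requires a sufficiently recent coreutils, and the MB/s it reports reflects real disk speed rather than the page cache:)

```shell
# Time a sequential 512 MB write; fdatasync makes dd wait until the
# data has actually reached the disk before reporting.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 conv=fdatasync
rm -f /tmp/ddtest
```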

Also, I have encountered some links that say performance won't improve when you bond Gigabit Ethernet cards in Linux. Is that true?
 
Old 10-30-2007, 05:11 PM   #4
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,362

Rep: Reputation: 172
Try:

[root@localhost ~]# hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 1186 MB in 2.00 seconds = 592.91 MB/sec
Timing buffered disk reads: 174 MB in 3.03 seconds = 57.42 MB/sec
[root@localhost ~]#


Where /dev/sda is whatever is appropriate for your drives.

If I remember correctly, 100 Mbit/s works out to 12.5 MB/s and GigE to 125 MB/s, so in my case GigE vastly outruns my hard drives. My older system uses RAID 0 on WD 160 GB drives (they were the "hot" drives when I bought them) and it reads at about 110 MB/s. I would be really surprised if bonding GigE did not improve link speed IF one has the hard drive speed to back it up. As a guess, I would say one would need at least 200 MB/s of drive throughput to see the advantage.

Good Luck
Lazlow
 
Old 10-30-2007, 05:43 PM   #5
bcg121
LQ Newbie
 
Registered: Oct 2007
Location: Pennsylvania
Posts: 19

Original Poster
Rep: Reputation: 0
Thanks again, Lazlow.

hdparm -tT /dev/hda yielded 31.37 MB/s. This makes sense. I could get very close to this number but never meet or exceed it.

netperf/netserver yielded 117 MB/s without bonding (the theoretical max for 1 Gbps is about 125 MB/s), and 205 MB/s with bonding (the theoretical max for 2 Gbps is about 250 MB/s). So I am indeed getting greater network performance with bonding. Note that it was very important to change tcp_reordering from 3 to 127 for balance-rr.
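For readers following along, the tcp_reordering change and a basic netperf run look roughly like this (TCP_STREAM is netperf's bulk-throughput test; the exact flags here are a sketch):

```shell
# On both hosts: raise the reordering tolerance so balance-rr's
# out-of-order delivery is not mistaken for packet loss.
sysctl -w net.ipv4.tcp_reordering=127

# On PC#2 (receiver):
netserver

# On PC#1 (sender): a 30-second TCP bulk-transfer test toward the bond.
netperf -H 10.0.0.2 -t TCP_STREAM -l 30
```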

I wish I could get scp to work without disk access. I tried:

time scp <filename> root@10.0.0.2:/dev/null/

but hit an "scp: /dev/null/: Is a directory" error. Then, I tried:

time scp <filename> root@10.0.0.2:/dev/null

but hit an "scp: /dev/null: truncate: Invalid argument" error. scp does give me a MB/s measurement here, but it is similar to what I was seeing before.

Is there a way to scp to /dev/null? Or is it that the syntax of my second attempt is valid and that it is actually the read from the source computer's hard drive that is the bottleneck? Am I correct to assume that the file will be read into the source computer's memory cache after the first copy attempt, eliminating the disk latency for subsequent copies?
 
Old 10-30-2007, 06:30 PM   #6
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,362

Rep: Reputation: 172
No, I do not know of a way. But what is the point? You already know from hdparm that you do not have the disk speed to handle any significant amount of data, and you know from netperf that for smaller data sets the bonded link is faster. The only use I can think of for link speed greater than GigE, without the disk speed to back it up, would be something like VNC.
 
Old 10-27-2008, 04:56 AM   #7
lxsure
LQ Newbie
 
Registered: Oct 2008
Posts: 1

Rep: Reputation: 0
What's the point???

Quote:
Originally Posted by bcg121 View Post
[...]

Hi guys, I am hitting the same problem now.
Two PCs are connected directly, each bonding two GigE ports together. When I test with iperf/netperf, the speed is actually lower than a single card, only about 600 Mb/s.
I tried changing tcp_reordering to 127, but the problem remains.
bcg121, it seems you successfully increased the speed with bonding. Can you tell me what the trick is and how you made it work?

Thanks in advance.
 
Old 10-28-2008, 07:56 AM   #8
bcg121
LQ Newbie
 
Registered: Oct 2007
Location: Pennsylvania
Posts: 19

Original Poster
Rep: Reputation: 0
I ended up writing two socket applications in C, one to send a file (after reading the entire file contents into memory) and one to receive the file. I did all the timing in the receive application. This eliminated any overhead that I was encountering by using standard Linux file transfer utilities like NFS and SCP.

Without bonding, I averaged 101MB/s. With bonding, I averaged 205MB/s. The results were very consistent.
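A similar memory-to-memory measurement can be approximated without custom C code using netcat, which also avoids both disks and ssh's cipher overhead (flag syntax varies between nc implementations; this matches traditional netcat):

```shell
# On PC#2 (receiver): listen on TCP port 5001 and discard the data.
nc -l -p 5001 > /dev/null

# On PC#1 (sender): stream 1000 MB of zeros across the bond and time it.
time dd if=/dev/zero bs=1M count=1000 | nc 10.0.0.2 5001
```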
 
Old 08-14-2010, 08:24 AM   #9
Corsari
Member
 
Registered: Oct 2004
Posts: 54

Rep: Reputation: 15
Question Bond setup with redundancy as primary target

Dear LQ friends

Can you kindly suggest the best choice for bonding two gigabit NICs when redundancy is the primary goal? And if any of the modes can deliver redundancy plus performance, that would be welcome too.
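(For pure redundancy, mode=active-backup (mode 1) is the usual choice: one NIC carries traffic and the other takes over on link failure. 802.3ad (mode 4) can give redundancy plus aggregated throughput, but it needs a switch that supports LACP. A minimal sketch on a CentOS-style layout — file locations and values here are illustrative, and XenServer generally expects bonds to be managed through its own xe toolstack rather than raw config files:)

```shell
# /etc/modprobe.conf -- load the bonding driver in active-backup mode,
# checking link state every 100 ms:
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (example address):
DEVICE=bond0
IPADDR=10.0.0.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
```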

Additionally, a little explanation of how to use netperf, and the recommended tcp_reordering setting for such a bond configuration, would be welcome as well.

Please note that my OS is XenServer by Citrix, which under the hood appears to be CentOS (at least at the repository level).

Thank you for any tip.

Robert


Last edited by Corsari; 08-14-2010 at 08:29 AM. Reason: typing error
 
Old 08-14-2010, 10:44 AM   #10
cjcox
Member
 
Registered: Jun 2004
Posts: 305

Rep: Reputation: 42
A gigabit NIC is a gigabit NIC. Tests of multiple bonded gigabit NICs really need multiple hitters: if you want to test your bonded config effectively, you need multiple clients. Then you can get a better feel for the performance differences. You simply won't see it with just one client going to one server. There's no magic that's going to happen. No miracle.
 
  


Tags
bond, xenserver

