
LinuxQuestions.org (/questions/)
-   Linux - Networking (https://www.linuxquestions.org/questions/linux-networking-3/)
-   -   network benchmarking (https://www.linuxquestions.org/questions/linux-networking-3/network-benchmarking-680086/)

vortmax 10-30-2008 02:45 PM

network benchmarking
 
We are setting up an MPLS network which includes a QOS profile. Before we make this active, I'd like to stress test it and see how it handles under different loads.

The QOS is port and subnet based, so I would like to saturate the pipe with file transfers and monitor speeds according to the QOS profile.

I.e., I have the following QOS layout:

priority 1: traffic from 172.16.16.0/21
priority 2: traffic from 172.20.16.0/21 :80,443
priority 3: everything else

It's a little more complicated than that, but I digress.

I have a machine at one end with a NIC on each network (a 172.20.16 and a 172.16.16), and for each QOS class I want to transfer as much traffic as I can to a server at the other end of the pipe.

Are there any packages already built to do this sort of thing? I thought about setting up an FTP server on the receive end that listens on a bunch of ports and just transferring that way... is that a valid approach?

chort 10-31-2008 01:25 AM

iperf, tcpbench, ttcp...
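
For example, a minimal iperf run might look like this (the addresses are only placeholders; bind the client to whichever source interface you want the QOS policy to see):

Code:

# on the far-end server
iperf -s

# on the sending machine, bound to the 172.16.16.x interface:
# 4 parallel TCP streams for 60 seconds
iperf -c <server_ip> -B 172.16.16.5 -P 4 -t 60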

archtoad6 10-31-2008 05:58 AM

http://en.wikipedia.org/wiki/Measuri...ork_throughput

http://en.wikipedia.org/wiki/Iperf
tcpbench: no Wikipedia article, no die.net man page -- a link for this one would be really nice.
http://en.wikipedia.org/wiki/Ttcp

Also mentioned in the Wikipedia throughput article:
http://en.wikipedia.org/wiki/Bwping

Available in the MEPIS 6.0 (Ubuntu 6.06 Dapper) repos:
  • iperf
  • nttcp
  • netpipes
  • the netpipe-* family of packages

More & better links would be nice, too.

agbmelbaust 11-02-2008 01:40 AM

I know this may sound a bit agricultural, but I have found it to produce quantitative and repeatable results.

The good old 'ping' command.

Use 'ping' with the 'packetsize' parameter.

Start with a basic ping, then increase the 'packetsize' until the round-trip time is somewhere between 10 and 20 milliseconds. Normally a 'packetsize' of 4096 bytes does the trick on local networks.
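
For example (the target address is just a placeholder):

Code:

# ten echo requests with a 4096-byte payload
ping -c 10 -s 4096 192.168.1.1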

It is not too difficult to put together a short shell script that records the date/time and the result into a log file too.


AB

archtoad6 01-13-2009 11:50 AM

Do you have one you'd care to share?


chort,
We're still looking for a description & man page for tcpbench. It does your readers no good when you casually mention s/w w/o providing a path to further info. As I said before, it's not in Wikipedia or die.net; it's also not found at freshmeat or SourceForge. Google Linux isn't even very helpful -- the 1st 2 hits are to kerneltrap.org & the 3rd is this thread.

You list OpenBSD as your 1st distro, is that where it's from?

baldy3105 01-24-2009 11:10 AM

Hi,

The combination of Iperf and Jperf, a Java frontend, is excellent for UDP and TCP testing; it will draw you graphs and everything. I used it recently to prove the throughput of my employer's MPLS/IPVPN. You can run this combination on both Windows and Linux; it works a treat.
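
A rough UDP run looks something like this (the server address and target rate are only examples; set -b to the circuit speed you want to prove):

Code:

# server side
iperf -s -u

# client side: 30-second UDP stream at 10 Mbit/s,
# loss and jitter are reported at the end
iperf -c <server_ip> -u -b 10M -t 30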

Ping is NOT a good tool for throughput testing, as it is increasingly throttled by various routers and firewalls because heavy ICMP traffic is normally indicative of DoS attacks.

archtoad6 01-25-2009 09:33 AM

Quote:

Originally Posted by baldy3105 (Post 3419837)
... Ping is NOT a good tool for throughput testing, as it is increasingly throttled by various routers and firewalls because heavy ICMP traffic is normally indicative of DoS attacks.

Even w/ the 'packetsize' parameter technique agbmelbaust mentioned?

catworld 01-25-2009 10:23 AM

Plus:

iftop
ntop
jnettop
iptraf
nload
nta
tcpstat
tcptrack
tshark
ipaudit
...

Quite a few, pick the one that seems most logical to you.

baldy3105 01-31-2009 05:06 PM

Quote:

Originally Posted by archtoad6 (Post 3420656)
Even w/ the 'packetsize' parameter technique agbmelbaust mentioned?

Yep, even with that. The throughput a network grants ICMP, which is supposed to be a network signaling protocol, is no indication of how it will handle TCP, which the vast majority of bulk transfer applications use as a transport layer.

iperf uses TCP as its test data stream and allows you to tweak window size and buffer size so you can find the optimal settings for your network environment.
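
For example (the 256K window is only illustrative; tune it to your bandwidth-delay product):

Code:

# server with a 256 KB TCP window
iperf -s -w 256K

# client with the same window, 60-second test, report every 5 seconds
iperf -c <server_ip> -w 256K -t 60 -i 5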

chort 02-01-2009 03:49 PM

tcpbench is apparently unique to OpenBSD, which is too bad since it's far less buggy than iperf.

zerobane 02-03-2009 09:48 AM

i"ve always used netcat and netperf.

netcat is nc from cli...

man nc

netperf does a very good job at saturing the network... fairly easy to use...

nc is usaully built in...
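
A rough netcat throughput test looks something like this (the port and transfer size are arbitrary, and option syntax differs slightly between the traditional and OpenBSD nc variants):

Code:

# receiving side: listen on TCP port 5000 and discard the data
nc -l -p 5000 > /dev/null

# sending side: push 1 GB of zeros through the pipe;
# dd prints the elapsed time and rate when it finishes
dd if=/dev/zero bs=1M count=1024 | nc <server_ip> 5000

For netperf, start netserver on the remote machine and run something like:

Code:

# 30-second TCP bulk-transfer test against the remote netserver
netperf -H <server_ip> -t TCP_STREAM -l 30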

johnsfine 02-03-2009 01:39 PM

Quote:

Originally Posted by agbmelbaust (Post 3328859)
The good old 'ping' command.

Use 'ping' with the 'packetsize' parameter.

That's exactly what I did recently to measure performance problems before and after rearranging a LAN with mixed 100Mb and 1Gb connections between obsolete switches.

That was after some failed Google searches trying to find tools like the ones described in this thread. When I have time, I hope to review those tools and see if I could have done the measurements more accurately.

Quote:

Start with a basic ping, then increase the 'packetsize' until the round-trip time is somewhere between 10 and 20 milliseconds. Normally a 'packetsize' of 4096 bytes does the trick on local networks.
Was that during normal load? It would be hard to get reproducible results with ping during normal load. I tested while the LAN had a very light load or a very controlled artificial load.

Maybe normal load is how you got times of 10 to 20 milliseconds for just 4096 bytes. I didn't have results anything like that. Even for 100Mb paths 4096 bytes was too small for consistent results or for times as large as 10 ms. For 1Gb paths any packet size smaller than the max (65500) seemed too small for meaningful results.

I got almost no round trips larger than 16 ms. Various 100Mb paths had various limits on packet size (somewhere in the 20000 to 65500 range) such that packets wouldn't pass at all above the path-specific limit. But packets small enough to pass at all almost always had a round trip under 16 ms. On the worst paths, increasing the packet size within a moderate range below that path's max seemed to just decrease the probability the packet would get through at all, rather than make it take more than 16 ms round trip.

I ended up comparing 100Mb paths by what size packet took a 15 ms round trip, rather than by how long a specific size took.
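
A small sweep script along these lines would reproduce that kind of measurement (the host and the step sizes are just placeholders):

Code:

#!/bin/sh
# step the ICMP payload size upward and log the average round trip
# for each size, to see where a given path crosses ~15 ms
HOST=192.168.1.1
LOG=$HOME/ping_sweep.log
for SIZE in 4096 8192 16384 32768 49152 65500; do
    # field 5 of the final "rtt min/avg/max/mdev" line is the average
    AVG=$(ping -c 5 -s $SIZE $HOST | tail -1 | awk -F '/' '{print $5}')
    echo "$(date '+%F %T')  ${SIZE} bytes  avg ${AVG:-no reply} ms" >> $LOG
done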

