Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
We are setting up an MPLS network which includes a QoS profile. Before we make this active, I'd like to stress-test it and see how it handles under different loads.
The QoS is port- and subnet-based, so I would like to saturate the pipe with file transfers and monitor speeds against the QoS profile.
i.e., I have the following QoS layout:
priority 1: traffic from 172.16.16.0/21
priority 2: traffic from 172.20.16.0/21 :80,443
priority 3: everything else
It's a little more complicated than that, but I digress.
I have a machine at one end with a NIC on each network (one on 172.20.16 and one on 172.16.16), and for each QoS class I want to transfer as much traffic as I can to a server at the other end of the pipe.
Are there any packages already built to do this sort of thing? I thought about setting up an FTP server on the receiving end that listens on a bunch of ports and just transferring that way... is that a valid approach?
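For what it's worth, here is a rough sketch of that idea using nc instead of FTP: one bulk stream per QoS class, with the source address picking the subnet and the destination port picking the class. The server address and source IPs below are placeholders based on the layout in the post, not real hosts; on the receiving end you'd run "nc -l -p <port> > /dev/null" for each port first.

```shell
#!/bin/sh
# Hedged sketch: generate one bulk TCP stream per QoS class. The IPs and
# ports here are placeholders; the echo stands in for the real transfer.

gen() {  # gen <source-ip> <server> <port>
    # On a live network, replace the echo with something like:
    #   dd if=/dev/zero bs=64k | nc -s "$1" "$2" "$3"
    echo "stream: src=$1 dst=$2:$3"
}

SERVER=10.0.0.1                   # placeholder for the far-end server

gen 172.16.16.5 "$SERVER" 5001    # priority 1: source in 172.16.16.0/21
gen 172.20.16.5 "$SERVER" 443     # priority 2: 172.20.16.0/21 on :443
gen 172.20.16.5 "$SERVER" 5002    # priority 3: same subnet, port outside :80,443
```

Running all three at once while watching per-class counters on the MPLS edge would show whether the profile actually prioritizes as configured.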
I know this may sound a bit agricultural, but I have found it to produce quantitative and repeatable results.
The good old 'ping' command.
Use 'ping' with its packet-size parameter ('-s').
Start with a basic ping, then increase the packet size until the round-trip time is somewhere between 10 and 20 milliseconds. Normally a packet size of 4096 bytes does the trick on local networks.
It is not too difficult to put together a short shell script that records the date/time and the result into a log file too.
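Something like the following would do it, a minimal sketch assuming an iputils-style ping whose reply lines contain "time=... ms" (the target host and packet size are placeholders):

```shell
#!/bin/sh
# Hedged sketch: extract the round-trip time (ms) from ping reply lines
# and append each with a timestamp to a log file.
LOG=./ping_rtt.log

log_rtt() {
    # Reads ping output on stdin; writes "date  rtt_ms" lines to $LOG.
    awk -v ts="$(date '+%Y-%m-%d %H:%M:%S')" '
        /time=/ {
            sub(/.*time=/, "")
            sub(/ *ms.*/, "")
            print ts "  " $0
        }' >> "$LOG"
}

# Typical use (adjust host and -s size for your network):
#   ping -c 10 -s 4096 172.16.16.1 | log_rtt

# Demonstration on a canned ping reply line:
echo "4104 bytes from 172.16.16.1: icmp_seq=1 ttl=64 time=12.4 ms" | log_rtt
cat "$LOG"
```

Dropped into cron, that gives exactly the date/time plus result log described above.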
chort,
We're still looking for a description & man page for tcpbench. It does your readers no good to casually mention s/w w/o providing a path to further info. As I said before, it's not on Wikipedia or die.net; it's also not found at freshmeat or SourceForge. A Google Linux search isn't even very helpful -- the first 2 hits are to kerneltrap.org & the 3rd is this thread.
You list OpenBSD as your 1st distro; is that where it's from?
The combination of iperf and jperf (a Java frontend) is excellent for UDP and TCP testing, and will draw you graphs and everything. I used it recently to prove the throughput of my employer's MPLS/IPVPN. You can run this combination on both Windows and Linux; works a treat.
Ping is NOT a good tool for throughput testing, as it is deliberately throttled more and more now by various routers and firewalls because it is normally indicative of DoS attacks.
Quote:
Ping is NOT a good tool for throughput testing, as it is deliberately throttled more and more now by various routers and firewalls because it is normally indicative of DoS attacks.
Even w/ the "'packetsize' size parameter" technique agbmelbaust mentioned?
Quote:
Even w/ the "'packetsize' size parameter" technique agbmelbaust mentioned?
Yep, even with that. The throughput a network grants ICMP, which is supposed to be a network signaling protocol, is no indication of how it will handle TCP, which the vast majority of bulk transfer applications use as a transport layer.
iperf uses TCP as its test data stream and allows you to tweak window size and buffer size, so you can find the optimal settings for your network environment.
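For anyone following along, typical iperf (version 2) invocations for that kind of tuning look like this -- the server address, window, and buffer sizes below are placeholders, not recommendations:

```shell
#!/bin/sh
# Hedged sketch: an iperf TCP test with an explicit TCP window (-w) and
# write-buffer length (-l). SERVER is a placeholder far-end host.
# On the receiving end, run the matching server first:  iperf -s -w 256K

SERVER=172.16.16.10
CMD="iperf -c $SERVER -t 30 -w 256K -l 8K"

if command -v iperf >/dev/null 2>&1; then
    echo "would run: $CMD"    # replace the echo with plain $CMD on a live network
else
    echo "iperf not installed; would run: $CMD"
fi
```

Sweeping -w and -l across a few values while watching the reported bandwidth is the usual way to find the optimal settings the poster mentions; adding -u and -b gives the equivalent UDP test.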
Quote:
Use 'ping' with its packet-size parameter ('-s').
That's exactly what I did recently to measure performance problems before and after rearranging a LAN with mixed 100Mb and 1Gb connections between obsolete switches.
That was after some failed Google searches trying to find tools like the ones described in this thread. When I have time, I hope to review those tools and see if I could have done the measurements more accurately.
Quote:
Start with a basic ping, then increase the packet size until the round-trip time is somewhere between 10 and 20 milliseconds. Normally a packet size of 4096 bytes does the trick on local networks.
Was that during normal load? It would be hard to get reproducible results with ping during normal load. I tested while the LAN had a very light load or a very controlled artificial load.
Maybe normal load is how you got times of 10 to 20 milliseconds for just 4096 bytes. I didn't see results anything like that. Even for 100Mb paths, 4096 bytes was too small for consistent results or for times as large as 10 ms. For 1Gb paths, any packet size smaller than the maximum (65500) seemed too small for meaningful results.
I got almost no round trips longer than 16 ms. Various 100Mb paths had various limits on packet size (somewhere between 20000 and 65500) above which packets wouldn't pass at all. But packets small enough to pass almost always had round trips under 16 ms. On the worst paths, increasing the packet size within a moderate range below that path's maximum seemed just to decrease the probability the packet would get through at all, rather than make the round trip take more than 16 ms.
I ended up comparing 100Mb paths by what packet size took a 15 ms round trip, rather than by how long a specific size took.