Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
I am not sure where to put this, but we have a web server we are trying to stress test with Apache Benchmark (ab) on Ubuntu 14.04 Server Edition. Our output traffic is only 900 KB/s, which is tiny, and our web server passes with flying colors. But part of our goal is to find the breaking point of the server.
I know it's not our CPU or RAM, because monitoring with htop shows that everything is fine. Via bmon I can see our output fluctuates around 900-1000 KB/s.
Via ethtool I can see the link supports a speed of 1000 Mbps, i.e. 125 MB/s. So I am not sure why this is happening.
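To make the gap concrete, here is a quick back-of-the-envelope conversion of the link speed reported by ethtool into the same units bmon reports. The numbers (1000 Mbps link, ~900 KB/s observed) come straight from the posts above:

```shell
# Link speed quoted by ethtool, in megabits per second
link_mbps=1000
# Convert to kilobytes per second: Mbit/s * 1000 / 8
link_kbps=$(( link_mbps * 1000 / 8 ))
observed_kbps=900
# Integer percentage of the link actually in use
pct=$(( observed_kbps * 100 / link_kbps ))
echo "link capacity: ${link_kbps} KB/s, observed: ${observed_kbps} KB/s (~${pct}%)"
```

So the observed 900 KB/s is well under 1% of the 125000 KB/s the link can carry, which is why the bottleneck is almost certainly not raw bandwidth.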
What are you using for your benchmark? Is it perhaps not able to issue HTTP requests quickly enough to fill your network connection? Is the client machine, or the benchmark process itself, hitting performance limits before your server does?
There's an HTTP benchmarking utility that comes with Apache called "ab" (ApacheBench). It can be told via command-line options to make connections concurrently, and thus stress the server more heavily than a benchmark that makes only one connection at a time.
It also gives you statistics on how well your server performed in handling http requests.
You might also want to try running it from both a remote machine and the server, to see if there are network effects limiting your connection throughput (which can be independent of available bandwidth).
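A typical ab invocation along the lines described above might look like this. The URL is a placeholder for your own server's address (the trailing slash is required by ab):

```shell
# 10000 requests total, 100 in flight at a time.
# -n: total number of requests; -c: concurrency level.
ab -n 10000 -c 100 http://server.example.com/
```

The summary ab prints at the end includes requests per second, transfer rate, and a latency percentile table, which covers the statistics mentioned above.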
Our client machine is a Dell R610 with a Broadcom NIC, and we are using ab. We tried several concurrency levels from 100 to 500, and it always peaks out at that rate. The client is not being limited; we monitored its CPU and memory, and we get the same result. The data rate is measured from the client side. I have a 1 Gb fiber link going to the server, so I know that's not a problem. It's just that our client can't seem to keep up.
What does top(1) or htop(1) show you on the dell r610 while ab is running?
Have you tried running ab on the apache server, instead of on the dell?
Editing to add: Concurrency above a certain point hits diminishing returns, and can slow down overall connection rate. Do you get better throughput with fewer concurrent connections? (10, 20, 40)
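One way to check for that diminishing-returns point is to sweep the concurrency level and compare the requests-per-second figure ab reports at each step. A rough sketch (the URL is again a placeholder):

```shell
# Sweep concurrency and extract the "Requests per second" line from
# each ab run, tagged with the concurrency level used.
for c in 10 20 40 80 160; do
  ab -n 5000 -c "$c" http://server.example.com/ 2>/dev/null \
    | awk -v c="$c" '/Requests per second/ {print "c=" c, $0}'
done
```

If throughput climbs from c=10 to c=40 and then flattens or drops, you have found the useful concurrency range for this client/server pair.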
Interesting. Our lowest concurrency was 100; I will have to try lower values later. Why should I run ab on the server itself? To test the throughput in reverse?
I tried it again just now at home. I set up two Ubuntu Server VMs and used various concurrencies from 10 to 100; even at home I get at most 16 Mbps. My iperf test shows the link can do 5 Gbps.
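That gap between iperf and ab suggests per-request overhead, not bandwidth, is the limit: iperf streams bulk data over a few long-lived connections, while ab pays connection and HTTP overhead on every small response. A rough estimate of the request rate needed to fill a 1 Gbit/s link, assuming (hypothetically) a ~1 KB response body and ignoring headers:

```shell
# Hypothetical response size in KB; adjust to your actual page size.
resp_kb=1
# 1 Gbit/s expressed in KB/s
link_kbps=$(( 1000 * 1000 / 8 ))
# Requests per second needed to saturate the link at that response size
req_per_sec=$(( link_kbps / resp_kb ))
echo "need ~${req_per_sec} req/s to saturate the link"
```

At ~125000 req/s for 1 KB responses, a single ab client will hit CPU and connection-setup limits long before the wire fills, which matches what you are seeing. Benchmarking with a larger static file, or with several client machines in parallel, would push the data rate much closer to the link capacity.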