Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
Hi!
My question is very simple: how can I increase the load average on Linux using programmer tools?
We need to check whether an increase in load average causes packet drops on our system.
Thanks for quick answers.
Staas.
I could be way off base here, but this is what I'd try ...
Take a big file (some video?) and transfer it from one PC to another (from a desktop to a server). Time how long the transfer takes as a starting benchmark, and check for errors during the transfer as well.
Then set up a batch file or script to transfer it automatically, and have it execute in a loop, overwriting the destination each time. That should provide the workload. If you can only transfer one file at a time when originating the batch/script from the server, then run it from several desktops (using different files, of course, so they're not stepping on each other). This should be scalable enough to get the kind of workload you want/need. I'd think you could have every desktop moving a big file to the server at the same time.
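The loop described above could be sketched roughly like this. Everything here is a placeholder assumption (the file paths, the 10 MB test-file size, the copy count) — and the `cp` is just a stand-in so the sketch runs locally; for a real network hop you'd swap in `scp` or `rsync` pointing at the server:

```shell
#!/bin/sh
# Sketch: repeatedly copy a large file to generate sustained load.
# SRC, DEST, and COUNT are placeholder assumptions -- adjust for your setup.
# For a real test, DEST would be something like user@server:/tmp via scp.
SRC="${SRC:-/tmp/bigfile}"
DEST="${DEST:-/tmp/bigfile.copy}"
COUNT="${COUNT:-5}"

# Create a throwaway test file if one doesn't already exist (10 MB of zeros).
[ -f "$SRC" ] || dd if=/dev/zero of="$SRC" bs=1M count=10 2>/dev/null

i=0
while [ "$i" -lt "$COUNT" ]; do
    cp "$SRC" "$DEST"   # swap in: scp "$SRC" user@server:/tmp/
    i=$((i + 1))
done
echo "done: $COUNT copies"
```

Timing the whole script with `time ./loadloop.sh` gives you the benchmark number to compare against as you add more desktops.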
Then, when you know how well it works in one direction, reverse the movement at each desktop so that they are copying files from the server simultaneously. For even more variety, have half the desktops copying a file FROM the server, and the other half copying files TO the server. This should provide enough opportunities for collisions that you should be able to make an informed decision.
Monitoring the error counts is up to you - there's probably some utility to do that, if you don't have something built-in.
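For the monitoring side, the kernel already exposes the two numbers in question: `/proc/loadavg` has the load averages, and `ip -s link` shows per-interface error and drop counters. A minimal check, assuming the loopback device as a stand-in for your real NIC (substitute the interface name from `ip link`):

```shell
#!/bin/sh
# Sketch: sample load average and interface drop counters while the
# transfer loop runs. IFACE defaults to loopback as a safe placeholder.
IFACE="${IFACE:-lo}"

# /proc/loadavg: 1-, 5-, and 15-minute load averages, then run-queue info.
awk '{print "load averages:", $1, $2, $3}' /proc/loadavg

# Per-interface statistics, including RX/TX errors and dropped packets.
ip -s link show "$IFACE"
```

Running this in a loop (e.g. under `watch`) alongside the file transfers would let you correlate rising load average with any increase in the drop counters.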