Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I deal with Red Hat EL 3 and 4.
Does anyone have any good ideas on tuning my servers for optimum performance?
eg1: How do I resolve high I/O issues and identify the culprits?
eg2: How do I diagnose CPU-bound performance issues?
eg3: How do I resolve memory-bound performance issues?
eg4: How do I resolve network-bound performance issues?
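For a quick first pass on all four questions, you can get surprisingly far with just ps and /proc, which are available on RHEL 3/4 without installing anything. This is only a triage sketch; the field positions in /proc/net/dev and the idea that "D"-state processes are your I/O culprits are assumptions about typical behavior, not hard rules.

```shell
# eg1 (I/O): processes in uninterruptible sleep ("D" state) are usually
# waiting on disk; repeated hits here point at the likely culprits.
ps -eo pid,stat,comm | awk '$2 ~ /^D/ {print "io-wait:", $1, $3}'

# eg2 (CPU): top CPU consumers (averaged over process lifetime).
ps -eo pid,pcpu,comm --sort=-pcpu | head -6

# eg3 (memory): free memory as a percentage of total, from /proc/meminfo.
awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {printf "free_pct=%.0f\n", 100*f/t}' /proc/meminfo

# eg4 (network): cumulative per-interface byte counters; sample twice
# and diff to estimate throughput. Field positions ($3 rx, $11 tx) are
# the usual /proc/net/dev layout, but verify on your kernel.
awk -F'[: ]+' 'NR>2 {print $2, "rx_bytes="$3, "tx_bytes="$11}' /proc/net/dev
```

Once one of these points at a subsystem, the dedicated tools (iostat for per-disk detail, top for live CPU, sar for history) take over.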
sysstat, iostat, vmstat, top, hdparm, sdparm, tune2fs, readprofile, oprofile... are all important tools for monitoring and tuning systems.
There's a commercial host-monitoring product at muldex.net that graphs a lot of this stuff for you at 1 Hz sample rates. I prefer something like this since I don't have to log in and run inefficient command-line stats collectors while the server is in production and already stressed. I also tend not to be watching programs like iostat or vmstat when performance issues happen; rather, someone complains via email the next day.
Archiving the statistics continuously and looking at graphs after the fact is a lot more flexible. There are free tools out there that do similar things (Nagios and Zabbix come to mind), but I haven't seen anything as efficient while providing a high sample rate, and it takes just a few minutes to get set up.
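If you just want the continuous-archiving idea without any product, a cron-style loop appending samples to a CSV is enough to plot after the fact. A minimal sketch (the file path, sample count, and chosen fields are all arbitrary; a real deployment would run from cron or a daemon, not a three-iteration loop):

```shell
# Append timestamped load and free-memory samples to a CSV for later
# plotting with gnuplot or a spreadsheet.
log=/tmp/perf.csv
for i in 1 2 3; do
    load=$(cut -d' ' -f1 /proc/loadavg)           # 1-minute load average
    freekb=$(awk '/^MemFree/ {print $2}' /proc/meminfo)
    echo "$(date +%s),$load,$freekb" >> "$log"
    sleep 1
done
head -3 "$log"
```

The same pattern is essentially what sysstat's sar does system-wide via its cron job, just with far more counters.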
collectl can monitor many of the different types of data that the listed tools can, and do it in one tool. It's lightweight (<0.1% CPU load), so you can keep it running all the time, and if you're not watching the terminal when a problem occurs, no problem: you can play the data back for the time frame you're interested in from the logs it collects. And finally, if you're into graphs, collectl can generate output in a form easily plotted by gnuplot or loadable into a spreadsheet that supports plotting.
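The playback idea is worth internalizing even without collectl: any timestamped log can be "replayed" for a window of interest. A toy sketch, where the epoch-plus-value format is an invented stand-in and not collectl's actual on-disk format:

```shell
# Fake timestamped samples: epoch-seconds and a load value.
cat > /tmp/stats.log <<'EOF'
1000 0.10
1060 0.90
1120 2.50
1180 0.30
EOF

# "Play back" only the window 1050-1150 (prints the 1060 and 1120 rows).
awk -v a=1050 -v b=1150 '$1 >= a && $1 <= b' /tmp/stats.log
```

The two-column output is also exactly what gnuplot's `plot '/tmp/stats.log' with lines` wants, which is why flat column-oriented logs are so convenient for after-the-fact analysis.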