Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
I was using the command "top" to see how my PHP scripts perform while I was running the MS stress tool against them. The scripts are just some simple ones that take inputs and store them in MySQL.
But I got around 40% total CPU usage; isn't that too much for just three PHP scripts?
And one other thing: is there any better command or tool I can use to monitor system resources, instead of the "top" command?
The reason I ask is that I had to manually record the values from "top" and then average them, but those values vary a lot, so the result isn't very reliable. Many thanks.
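On the monitoring question, one low-effort option is to let top itself do the recording: it can run non-interactively in "batch" mode and write plain text you can log and average with a script. This is a sketch; the flags assume the common procps top.

```shell
# Batch mode: -b non-interactive, -n number of samples, -d delay in seconds.
# Samples can be redirected to a file and averaged later instead of being
# copied down by hand.
top -b -n 2 -d 1 | grep -i 'cpu' | head -4
# The 1/5/15-minute load averages are always available here, no tool needed:
cat /proc/loadavg
```

Redirecting the `top -b` output to a file gives you a record you can post-process with awk or a spreadsheet.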
The CPU usage for each thread is controlled by a part of the kernel called the scheduler. The scheduler's job is mainly to make sure the CPU is running code as much as possible (rather than "context-switching" between processes or waiting for hardware).
So 40% is actually quite low, unless you're running other software at the same time. You can change the distribution between processes by using the "nice" and "renice" commands, which help the scheduler decide which process to run when it has a choice (effectively throttling back your PHP scripts to let something else run instead).
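A quick sketch of those two commands (note that without root you can only make a process *more* polite, i.e. raise its nice value):

```shell
# Start a CPU-bound busy loop at nice value 10:
nice -n 10 sh -c 'while :; do :; done' &
pid=$!
# Raise it to 19 (lowering its priority further is allowed without root;
# moving it back towards 0 would not be):
renice -n 19 -p "$pid"
# The NI column shows the value the scheduler now uses:
ps -o pid,ni -p "$pid"
kill "$pid"
```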
To analyse the performance of a single program, I would normally use gprof, but I'm not too sure if that works with scripts.
The CPU's time is split out evenly between all running processes. At any single instant of time, only one process is actually running on the CPU, and the kernel switches which one is running about once every 0.00005 seconds (very roughly). A process gets skipped only if it has nothing to do.
So, if you only had two processes running, and they both did something simple (like adding two numbers together repeatedly) they would both get a CPU time of around 50%. If they were both doing something really slow and complicated (like computing PI to the two-millionth decimal place), they would still both get 50% of the CPU time.
The only case where a process will get a lower percentage of CPU time is if it's doing lots of writing to or reading from hardware, or lots of sleeping (being idle). In that case, the scheduler tries to run another process while the first one is waiting, to stop the CPU from going idle; thus, one process gets less CPU time while another process gets more. Only if all processes are waiting will the total CPU load actually go below 100%.
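The `time` command makes this visible: a pure computation spends its wall-clock time as "user" CPU time, while a sleeping process spends almost none of it on the CPU.

```shell
# CPU-bound: "real" is roughly equal to "user", so CPU usage is near 100%.
time sh -c 'i=0; while [ $i -lt 200000 ]; do i=$((i+1)); done'
# Waiting: one second of real time, near-zero user/sys time, because the
# scheduler hands the CPU to other processes while this one sleeps.
time sleep 1
```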
The fact that your process is achieving 40% CPU load is more of a tribute to how well the database is caching your data than anything else (you're not waiting for the database very much).
If you really want to, you can reduce the CPU load by calling something like usleep(1) every so often in your code. All this does is pause the script for a moment (nominally 0.000001 seconds) so that another process gets to run during that time. It won't make anything more efficient at all, but it will reduce the CPU load (if that really is what you're worried about).
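The same idea sketched in shell terms (PHP's usleep(1) asks for one microsecond; here each chunk of work is followed by a roughly 10 ms pause):

```shell
i=0
while [ $i -lt 5 ]; do
  : # a chunk of real work would go here
  i=$((i+1))
  sleep 0.01   # hand the CPU back to the scheduler between chunks
done
echo "done after $i chunks"
```

The longer the sleep relative to the work, the lower the CPU percentage top will report, at the cost of total throughput.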
Also, remember that scripts tend to have a much longer start-up/shutdown phase than other programs, so you'll be taking a CPU hit there.
The other thing that can cause the CPU to be idle is not having any code to run.
If every process in the system is, at a given point in time, either sleeping or waiting for I/O, then the scheduler doesn't have anything to run, and so the CPU will remain idle.
If there isn't any I/O then the CPU will ideally be 100% used, except for "context-switching", i.e. the time taken to switch between different running processes. This generally accounts for anything up to around 1% of CPU time (roughly). The more context switching you do, the less useful work the CPU gets done, but the more responsive individual processes are (because they each get to run more often). You have some indirect control over this through the process's "nice" value.
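On Linux the kernel keeps a running count of context switches in /proc/stat, so you can measure the rate yourself (this is also a handy scriptable alternative to watching top interactively):

```shell
# Total context switches since boot, straight from the kernel:
awk '/^ctxt/ {print "total context switches:", $2}' /proc/stat
# Approximate per-second rate from two samples one second apart:
a=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
b=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((b - a))"
```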
In order to know what kind of hit your system will take if you get lots of voters, you will need to actually stress-test it and kick off several dozen/hundred/thousand PHP processes at once, and time how long they take. You can't just assume how well it will scale up, because there are a lot of tricks that the software employs to make these things run efficiently with lots of processes that aren't necessarily efficient with just a few. Also, if you're using a decent database like MySQL or PostgreSQL, then as you use them they'll start to learn what kind of queries you tend to make on them and start to optimise themselves for those queries.
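A minimal harness for that kind of stress test: launch N jobs concurrently and time the whole batch. The inner sh loop here is only a stand-in for however you actually invoke one PHP request; substitute your own command.

```shell
N=50
start=$(date +%s)
n=0
while [ $n -lt $N ]; do
  # stand-in for one "request" (replace with your PHP invocation):
  sh -c 'i=0; while [ $i -lt 20000 ]; do i=$((i+1)); done' &
  n=$((n+1))
done
wait   # block until all N background jobs have finished
end=$(date +%s)
echo "$N concurrent jobs took $((end - start))s"
```

Rerunning this with N at 50, 500, and 5000 gives you a rough scaling curve rather than a single guess.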
But one more question: my main question is how to test how well the Unix machine is performing compared to the MS Windows one.
For Unix, I'm using PHP and MySQL;
for Windows, I'm using ASP and SQL Server.
And I want to compare their performance (CPU usage %, memory...). Any idea of what I can do? Thanks a lot. Here is my AIM screen name, manofwax21; if you don't mind, I'd like to discuss it with you. Thanks A LOT =)