Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Distribution: Slackware (mainly) and then a lot of others...
Posts: 855
Rep:
Configure CPU interrupts manually
I was installing FreeBSD for one of my friends, and during the entire install the load shown by top never crossed 1.0. When I install Linux, top always shows a CPU load of more than 1.0.
Also, on the BSD machine, copying a huge file (4 GB) takes a long time, but the load never goes beyond 1.0. Linux will do it much faster, but then there is a difference in the CPU load.
What I want to know is: is there any way I can interrupt the processor so that the load does not go beyond 1.0?
Thanks for reading.
Under Linux, load != CPU utilization. Under FreeBSD, it may be.
The relationship between the two is indirect. The load average on Linux only measures the task run-queue length. See http://www.teamquest.com/resources/g...ay/5/index.htm for more information.
Does that clear up the situation a bit? High uptime load averages, under Linux, aren't necessarily a bad thing.
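For the curious: on Linux those figures come straight from /proc/loadavg, so it's easy to look at them directly. A quick read-only sketch (standard awk, nothing system-specific):

```shell
# /proc/loadavg holds the 1-, 5- and 15-minute load averages, then
# runnable/total task counts and the most recently assigned PID.
cat /proc/loadavg
# Pull out just the 1-minute figure:
awk '{print "1-min load:", $1}' /proc/loadavg
```

Watching that number while a big copy runs is a more direct view than eyeballing top.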
Sorry if I was not very clear in the first post. On BSD I saw that interrupts occurred very often, and that kept the load level down.
Is it possible to configure the processor to be interrupted when the load is at, say, 0.80, so that it would not cross 0.90 at least? Since Linux is so flexible, I was wondering whether I would be able to do that if I ever wanted to.
Well, I had thought about nice, but it cannot be changed at runtime - unless you are root. So that is something I would not do.
I just hope I do not have to compile a kernel to get this functionality.
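For what it's worth, a non-root user *can* raise a process's niceness (i.e. lower its priority) both at launch and at runtime; root is only needed to go the other way. A minimal sketch (the busy loop is just a hypothetical stand-in for a real job):

```shell
# Start a CPU-heavy job at the lowest priority (niceness 19).
# No root needed: unprivileged users may always be *nicer*.
nice -n 19 sh -c 'i=0; while [ "$i" -lt 100000 ]; do i=$((i+1)); done' &
pid=$!
# Inspect its niceness while it runs (NI column):
ps -o pid,ni,comm -p "$pid"
wait "$pid"
```

To adjust an already-running process, `renice -n 19 -p <pid>` works the same way: raising the value needs no privilege, lowering it needs root.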
I do not understand what the problem with a load greater than 1 is in the first place.
So, you're not prepared to simply use su{do} to adjust a nice value, but you're prepared to hack the kernel to try and achieve this stupidity. You don't understand what you're seeing, but demand an answer to fit your misperception.
@syg00 - No, I am _not_ ready to hack the kernel to get this functionality. Using 'nice' is OK, but to be honest I have rarely used it. Using 'nice' to achieve the result is fine, but do you realise that I would have to use nice _every_ time I do something computationally intensive?
What I am looking for is a more permanent fix, where the load on the CPU does not cross 1.0.
This is not only about my system - I also want to learn something more about the way I can control processes eating away at the CPU.
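There is no kernel knob for a loadavg ceiling, but the kind of throttling being asked about can be done entirely from user space: the cpulimit tool works by alternately stopping and continuing a process so it only runs part of the time. A crude pure-shell sketch of the same duty-cycle idea (the busy loop is a hypothetical stand-in for a real job):

```shell
# Duty-cycle throttle: SIGSTOP/SIGCONT a process in a loop so it can
# only use roughly half of one CPU - no root, no kernel changes.
sh -c 'while :; do :; done' &   # stand-in for a CPU-hungry job
pid=$!
for i in 1 2 3; do
  kill -STOP "$pid"             # paused: not runnable, off the runqueue
  sleep 0.05
  kill -CONT "$pid"             # running again
  sleep 0.05
done
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || :
```

For a persistent, system-wide version of the same thing, cgroups (cpu shares/quotas) are the mechanism the kernel actually provides - but that caps CPU time, not the load average as such.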
See post #7.
Loadavg is not "load on the CPU". The link above is relevant for Unix (including, presumably, FreeBSD) - not Linux.
Go read this. It's not completely accurate these days, but it will give you a better idea of what loadavg is. Setting a "watermark" of 1.0 is meaningless - especially on multi-core/hyper-threaded chips, but even on a single CPU.
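To see why a fixed 1.0 watermark is meaningless, compare the load average against the number of CPUs rather than against 1.0 - a sketch (assumes GNU coreutils for `nproc`):

```shell
# A loadavg of 1.0 means "one runnable task on average". Whether that
# is saturation depends on how many CPUs can run tasks in parallel.
cores=$(nproc)
load=$(awk '{print $1}' /proc/loadavg)
awk -v l="$load" -v c="$cores" 'BEGIN {
  printf "load %.2f across %d CPU(s): %s\n", l, c,
         (l > c) ? "saturated" : "headroom"
}'
```

On a 4-core box a steady load of 3.0 still has headroom; on a single-core box the same figure means work is queuing.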
There is nothing to fix. Furthermore, if you really want to increase the frequency of hardware interrupts, you will be squandering CPU resources - every hardware interrupt carries overhead, because registers must be saved and restored and the kernel has to decide what to do next, i.e. check whether the current task needs to be suspended and which task needs to be resumed.
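You can watch that interrupt traffic yourself: /proc/interrupts lists each IRQ source with one count column per CPU. A read-only sketch (no configuration involved; exact line names vary by architecture):

```shell
# One row per IRQ source, one count column per CPU. Sampling this file
# twice and diffing the counts shows how often the kernel is interrupted.
head -n 5 /proc/interrupts
# The timer interrupt line, if present under that name on this machine:
grep -i 'timer' /proc/interrupts | head -n 1
```

Every increment in those columns is a register save/restore plus a scheduling decision - which is exactly the overhead described above.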
Not sure if you're referring to my link, but the link I posted specifically outlines the fact that under Linux the load average is NOT the same as the task run-queue length. It uses Solaris as a comparison to show that under some systems the two are directly related. Regardless, both of the above links were quite relevant, unless you can detail why they weren't?
Additionally, the first line of the second post:
Quote:
Under Linux, load != CPU utilization.
And the last line:
Quote:
High uptime load averages, under Linux, aren't necessarily a bad thing.