How does loadable modules affect the overall performance of the linux system?
Hi there,
I've read in a couple of books and HOWTOs that modules loaded into the Linux kernel (rather than compiled in) come with a slight performance and memory-usage penalty. So my question is: does this penalty go away if you compile the modules into the kernel, so they don't have to be loaded every time they are needed (aside from the larger kernel size)?
Thanks.
P.S. I know it's a luser question, but I couldn't find a comparison of performance with and without loaded modules.
Hi,
Yes, it's true that modules are loaded at some cost in memory and performance.
What happens is this: when a particular module is not already in memory, it has to be brought in, which may mean something else (a process, perhaps) gets swapped out of main memory to make room.
That costs CPU cycles, and whatever was swapped out will have to be swapped back in at some later time. Hence the performance impact.
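You can check the current module situation on a running system: every loaded module, its size, and its reference count are listed in /proc/modules (lsmod prints the same data). A quick sketch, assuming a standard Linux /proc:

```shell
# List each loaded module with its size in bytes and its reference count.
# /proc/modules columns: name, size, refcount, users, state, address.
if [ -r /proc/modules ]; then
    awk '{ printf "%-24s %10d bytes  refs: %d\n", $1, $2, $3 }' /proc/modules | head
else
    echo "no /proc/modules here (module support compiled out, or not Linux)"
fi
```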
Second thing: why don't we just compile all the modules in?
I don't have a clear picture, but as far as I know, compiling all the modules into the kernel increases its size, which decreases the kernel's stability and slows the system down.
The smaller the kernel, the faster the system.
Quote:
I don't have a clear picture, but as far as I know, compiling all the modules into the kernel increases its size, which decreases the kernel's stability and slows the system down. The smaller the kernel, the faster the system.
Not sure I agree with your points here. Sure, the size of the kernel will increase with built-in support, but consider that the kernel hook code needed to bind a module to the running kernel also adds some size. As for stability, I do not think static kernels are less stable at all; in fact, logic would dictate that a kernel with removable parts would be less stable. That said, I don't think there is really any difference in stability at all.
And also, a smaller kernel does not directly translate to a faster kernel, rather one that just takes less space in memory.
My guideline for building kernels: anything you need all the time, such as network drivers, sound drivers, etc., should be static. Drivers/modules you only need some of the time, such as loopback devices, ramdisks, and transient filesystems, are good candidates for modules.
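That guideline maps directly onto the kernel configuration: =y builds a driver in statically, =m builds it as a module. A hypothetical .config fragment along those lines (the option names are real kernel options, but which drivers you actually need depends entirely on your hardware):

```
# Built in (=y): hardware used on every boot
CONFIG_E1000=y           # network driver (example NIC; substitute your own)
CONFIG_SND_HDA_INTEL=y   # sound driver

# Modular (=m): only needed some of the time
CONFIG_BLK_DEV_LOOP=m    # loopback devices
CONFIG_BLK_DEV_RAM=m     # ramdisks
CONFIG_ISO9660_FS=m      # transient filesystem (CD images)
```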
Quote:
And also, a smaller kernel does not directly translate to a faster kernel, rather one that just takes less space in memory.
What I know from experience is that after recompiling the kernel (removing driver support I don't require) it boots up faster. But I'm not sure whether, after booting and getting into X, there is any increase in system speed.
Please explain it a bit.
Also, my Linux guru once told me that compiling all the modules into the kernel will increase its size and make it unstable. Please shed some light on this concept too.
Quote:
What I know from experience is that after recompiling the kernel (removing driver support I don't require) it boots up faster. But I'm not sure whether, after booting and getting into X, there is any increase in system speed. Please explain it a bit.
...exactly my point.
Quote:
My Linux guru once told me that compiling all the modules into the kernel will increase its size and make it unstable. Please shed some light on this concept too.
Well, I think we need to make the distinction between every possible module and every module you actually need for your hardware.
Of course, if you add _everything_, your kernel will be absurdly huge. However, I strongly disagree that adding all your required drivers statically introduces any more instability than having them as modules, and I rather think it is the job of your Linux guru to justify his own contention, not mine.
So I suppose that if you are dealing with an embedded system, it is better to have everything statically built into the kernel (after all, you know exactly what will be needed), because loaded modules will eat up precious memory.
Quote:
because loaded modules will eat up precious memory
Well, I think the difference is quite negligible, probably on the order of 1k or less. Perhaps in an embedded device it may matter, but I do not know enough about such devices...
Bottom line: in a desktop/server machine, if you can tell any significant difference (as in noticeable effects on your system) in speed or size between the same driver static or modularized, you are probably deluding yourself.
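For what it's worth, you can put a number on "negligible" by summing the size column of /proc/modules. A rough sketch, assuming a standard Linux /proc:

```shell
# Sum column 2 (module size in bytes) across all loaded modules.
if [ -r /proc/modules ]; then
    awk '{ total += $2 } END { printf "all loaded modules together: ~%d kB\n", total / 1024 }' /proc/modules
fi
```

On most desktops the total works out to a few megabytes spread across dozens of modules, so any per-module static-vs-modular bookkeeping difference is tiny by comparison.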
When you get an ordinary "distro," it probably has drivers for just about anything and everything that could be on a computer. (DecSystem token-ring cards, anyone?) And it might troll through them; might even load them, leaving them to unload themselves or time-out. A lot of wasted time.
There's also a "hardware check on reboot" that wastes time.
So, you can get rid of the modules you don't need, strip them out of the load-list, and your computer will start up noticeably faster. Turn off the hardware-check too.
Once you have your system "un-gunked" so that it's not carrying around all that weight, the module-load time becomes insignificant.
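Stripping modules from the load list usually doesn't require rebuilding anything: most distros honor modprobe's blacklist mechanism. A sketch, assuming a modprobe.d-style setup (exact file paths and the module names worth blacklisting vary by distro and hardware):

```
# /etc/modprobe.d/blacklist.conf
# Keep modprobe from auto-loading modules you never use.
blacklist pcspkr    # PC-speaker beep driver
blacklist floppy    # floppy controller
```

Where to turn off the boot-time hardware check also depends on the distro (e.g. the kudzu service on older Red Hat systems).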
Quote:
And also, a smaller kernel does not directly translate to a faster kernel, rather one that just takes less space in memory.
I'm not even really sure this is true.
Linux only loads needed code into memory, not the whole thing.
The reason, in my opinion, that modules take a little (and very little) extra resource is the code that keeps track of module usage and checks whether a module is still needed.