Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Is it better to compile everything into your kernel and have a giant kernel, or is it better to run your kernel with modules and an initrd image? I currently have everything I have / use compiled in, and it seems to be slower and larger than the huge smp kernel I started with. I'm seeking to extract every ounce of performance out of this machine, and am open to tips and suggestions.
I did trim everything out of the kernel that I don't use, but switched everything that I do use to compiled in.
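One quick way to see how a config leans is to count built-in (=y) versus modular (=m) options. The sketch below runs against a made-up .config fragment; on a real system you would point it at /boot/config-$(uname -r) or /proc/config.gz instead. The option names here are just examples.

```shell
# Write a hypothetical .config fragment (illustrative options only).
cat > /tmp/sample.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_SND_HDA_INTEL=m
CONFIG_USB_STORAGE=m
CONFIG_SMP=y
EOF

# Count how many options are built in versus built as modules.
builtin=$(grep -c '=y$' /tmp/sample.config)
modular=$(grep -c '=m$' /tmp/sample.config)
echo "built-in: $builtin, modules: $modular"
```

Running the same two greps against your real config file gives a rough feel for how monolithic your kernel is compared to the stock huge smp config.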
I don't think it really matters. I don't think that attaching modules will give you a faster or slower kernel. The CPU, memory, and system bus are what give you speed; the kernel, etc. are just a bunch of 1's and 0's. On old (really old) computers software mattered more; today's computers are so fast that software issues such as optimization go mostly unnoticed.
Disabling module support in your kernel does have other implications (for example, you become invulnerable to rootkits that come as kernel modules). But on modern desktops that's almost impossible, because you need module support to install drivers like those for graphics cards, which most times are closed source.
It does matter for startup speed. Loading a larger kernel can in some cases make a big difference in boot time.
My theory is to compile most things as modules. Not only will this be faster to boot, it will also cause fewer potential problems from driver conflicts. Still, I would make sure the filesystem drivers are built in so you don't have to use an initrd. Most of the rest can all be modules.
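The reason the filesystem driver has to be built in is that the kernel must mount the root filesystem before it can load any modules from it; if the driver is =m, an initrd has to supply it. A quick check, sketched here against a made-up config fragment (substitute your real config file and root filesystem type):

```shell
# Hypothetical config fragment; on a real box use /boot/config-$(uname -r).
cat > /tmp/config-demo <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_XFS_FS=m
EOF

# If the root fs driver is built in (=y), the kernel can mount / directly;
# if it is a module (=m), an initrd must load it first.
if grep -q '^CONFIG_EXT4_FS=y' /tmp/config-demo; then
    echo "ext4 is built in: no initrd needed for an ext4 root"
else
    echo "ext4 is modular: an initrd is required"
fi
```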
I find myself agreeing with H_TeXMeX_H. In my custom kernels I was compiling in sound and other stuff, and my hand-compiled kernels would be larger than huge smp, almost 5 MB! I would like the thoughts of a few others on this, though.
Quote: I did trim everything out of the kernel that I don't use, but switched everything that I do use to compiled in.
Not sure if this is "the best way", but I prefer to take the official Slackware kernel configuration, then enable the few additional things I need (optimize for CPU, larger timer frequency, etc.).
Quote: Not sure if this is "the best way", but I prefer to take the official Slackware kernel configuration, then enable the few additional things I need (optimize for CPU, larger timer frequency, etc.).
Yeah, I agree. I use the generic kernel config, then tweak it.
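The "take the stock config, then tweak" approach amounts to flipping a handful of options rather than answering every kconfig question from scratch. Here is a sketch of one such tweak, raising the timer frequency from 250 Hz to 1000 Hz, done with sed against a made-up config fragment (in a real kernel tree you could use scripts/config or make menuconfig instead; the option names are real kconfig symbols, but the file here is illustrative):

```shell
# Hypothetical starting config (stand-in for the stock Slackware .config).
cat > /tmp/config-demo2 <<'EOF'
CONFIG_HZ_250=y
# CONFIG_HZ_1000 is not set
EOF

# Flip the timer frequency: disable 250 Hz, enable 1000 Hz.
sed -i -e 's/^CONFIG_HZ_250=y/# CONFIG_HZ_250 is not set/' \
       -e 's/^# CONFIG_HZ_1000 is not set/CONFIG_HZ_1000=y/' /tmp/config-demo2

grep '^CONFIG_HZ' /tmp/config-demo2
```

After editing a real .config this way, running make olddefconfig (or make oldconfig) lets kbuild fill in anything the edit left inconsistent before building.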