Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
The 2.6 kernel is said to be faster than 2.4 on a modern computer, and trying it on my 1.4 GHz Athlon makes me believe it's true.
But what about old computers?
I have two computers I need to speed up, and will try to compile a new kernel for them:
1.) P166, 128-256 MB RAM. Network server, no GUI; NFS, FTP & Samba only.
2.) AMD K6-2 333, 384 MB RAM. Home workstation.
(I haven't decided which distro I'll put on them yet.)
Will a 2.6 kernel be better than 2.4 on these machines?
Of course I could just try, but compiling a new kernel does take some time. I would like to do it only once per computer.
The 2.6 kernel has a more efficient scheduler than 2.4, so in theory it will run faster assuming you're running more than two or three processes (most systems run at least 20-30).
That said, you'll also need to make sure you compile in support for DMA and the like, or you could cause a slowdown (just as you could with a 2.4 kernel).
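You can check from userspace whether the disk is actually using DMA before and after a kernel change. A sketch with hdparm (the device name /dev/hda and hdparm being installed are assumptions; adjust for your machine):

```shell
#!/bin/sh
# Check (and roughly benchmark) DMA on an IDE disk.
# /dev/hda is an assumed device name; yours may differ.
DISK=${DISK:-/dev/hda}
if [ -b "$DISK" ] && command -v hdparm >/dev/null 2>&1; then
    hdparm -d "$DISK"   # "using_dma = 1 (on)" means DMA is active
    hdparm -t "$DISK"   # quick sequential-read benchmark
else
    echo "skipping: $DISK or hdparm not available here"
fi
```

If DMA turns out to be off, `hdparm -d1 /dev/hda` enables it, but only if the kernel was built with the right IDE chipset driver.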
2.6 is very nice for database applications. imho it's more responsive, but i switched back to 2.4.27 a few days ago - i had some weird (possibly) scheduler-related troubles and really can't see much difference now - i actually think it's a little faster now.
i didn't try 2.6 on my old servers (P133 / PII 300), and since i don't need the new features there i won't switch that soon - they are perfectly tuned for their work.
if they have to do a lot of work i'd suggest using preempt (which is already included in the 2.6 source) and/or the low-latency patch with the 2.4.26 kernel (afaik it's not out for .27 yet).
if you want even more speed i'd compile with the -O3 -ffast-math -funroll-loops -fforce-mem flags (they didn't work with gcc 2.9*, but they give a fast & stable system for me with the hardware i'm using).
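For reference, those flags would go into the CFLAGS line of the top-level kernel Makefile. A hand-edited sketch against a 2.4-style tree is shown below; the stock line uses -O2, and (as another poster points out) anything else is unsupported territory:

```makefile
# Top-level kernel Makefile, CFLAGS line - hand-edited sketch, NOT a
# supported configuration. The stock line uses -O2; -fforce-mem needs
# a gcc 3.x series compiler.
CFLAGS := $(CPPFLAGS) -Wall -Wstrict-prototypes -Wno-trigraphs \
          -O3 -ffast-math -funroll-loops -fforce-mem \
          -fno-strict-aliasing -fno-common -fomit-frame-pointer
```

Note that -ffast-math relaxes IEEE floating-point behaviour, which is exactly the kind of thing a kernel may not tolerate; treat this as an experiment, not a recommendation.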
<i>The 2.6 kernel has a more efficient scheduler than 2.4, so in theory it will run faster assuming you're running more than two or three processes (most systems run at least 20-30).</i>
I really don't think this is a good enough reason to update. It's debatable whether it is faster. People should read the changelogs and see what's actually useful before deciding to update. That on its own is NOT a useful reason.
Originally posted by mritch
if you want even more speed i'd compile with the -O3 -ffast-math -funroll-loops -fforce-mem flags (they didn't work with gcc 2.9*, but they give a fast & stable system for me with the hardware i'm using).
The compiler options (especially the processor type, for later processors) make a bigger difference than the kernel version, whatever you do. But I've had a lot of trouble compiling with -O3: I ended up with a kernel that just locked up on me occasionally. The culprit seemed to be the -ftracer option that -O3 implies. Using anything other than -O2 isn't “officially” supported for the kernel.
And...
All these compiler options - should I really use them without understanding what they do? I mean, is what you suggest safe for everyone, or does it have to be tuned for different hardware?
Also, are the options the same for 2.4 and 2.6?
Paulinimus, you've got a point there.
But then, I will compile a new kernel anyway. The question is: upgrade to 2.4.x or 2.6.x (since I'll naturally be using an older distro)?
No
you'll get a fast and reliable kernel without twiddling with the makefile. the standard is -O2 anyway.
if you don't use that same kernel on another box, just make sure to select your cpu type while you're configuring (make menuconfig or make xconfig, ...) and compile in the optimizations your machine supports. you'll get a little overview of what your cpu can do when you type "cat /proc/cpuinfo".
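To pick out the interesting lines (this is Linux-specific, reading /proc/cpuinfo; the flags line shows features like mmx, 3dnow or sse that map to kernel configuration options):

```shell
# What CPU is this, and what instruction-set extensions does it support?
grep -m1 'model name' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo
```

On an old K6-2 you'd expect to see mmx and 3dnow in the flags, but no sse.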
Thanks..
Since I have tried to compile a new kernel before (twice it succeeded, but four times it didn't work), does it matter which tool I use for configuration? config, menuconfig, xconfig... there's also a new one, I think? It's partly a matter of taste, of course, but is any one tool more reliable than another?
The other tools you have are cloneconfig, which copies the configuration of the currently running kernel from /proc/config.gz, and gconfig, which has a GNOME-style GTK front-end.
All any of these do is write a file called /usr/src/linux/.config, which stores the settings as CONFIG_... lines (the build system generates the actual #define statements from it).
I'd recommend using cloneconfig first to get the basic settings, and then either xconfig or gconfig, because they seem slightly easier to use in my opinion; but there's not much in it.
I wouldn't use “make config” unless you have to; it asks you a series of questions and doesn't give you a chance to go back and change your answers.
In terms of stability etc. of the configuration tools, they should all be reliable. They all read the same option hierarchy, so you shouldn't ever get a different set of options from one tool to another given the same settings.
You can also run “make help” in /usr/src/linux to get a list of the various “make” commands and what they do.
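Whichever tool you use, the resulting .config is plain text and safe to read by hand. An illustrative fragment (symbol names from a 2.6-era x86 tree; the exact set depends on your kernel version) might look like:

```
#
# Processor type and features
#
# CONFIG_M586 is not set
CONFIG_M586TSC=y
# CONFIG_MK6 is not set
#
# Character devices
#
# CONFIG_RIO is not set
```

Options that are enabled get =y (or =m for modules); disabled ones appear as "# CONFIG_FOO is not set" comments rather than being omitted.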
Now another question arises:
I'm almost out of time with one computer, an old AMD K6-2 333 running VectorLinux 4.3 rc1. It's going to my sister tomorrow morning.
The question is: can I build a new kernel, complete with all modules, on a computer here at work and just copy it over?
I think normally not; just copying a module into the modules directory doesn't make that module usable. But for now I'll simply recompile the kernel that's already installed, to speed up performance a bit. It's a 2.6.7 (not sure about the last digit). Maybe I can just use the modules already in use?
i'm using debian and i'm able to build a package which i can simply install on a different machine (.rpm-alike - but better ;-). maybe you have such a tool for your distribution?
you can, however, copy the modules and the kernel image to the other machine. copy your image and everything from /lib/modules/[version of your kernel] to the other machine, mv the vmlinuz (kernel image) into /boot, mv the [version of your kernel] directory to /lib/modules, and use that vmlinuz in your bootloader. this should work too.
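Those steps can be sketched as a small script. The version number 2.6.7 and the staging path are assumptions; on the real target you would run it with DEST=/ and SRC pointing at wherever you copied the files (here it builds a fake staging area so the sketch runs end-to-end):

```shell
#!/bin/sh
# Sketch: install a kernel image plus its modules that were built on
# another machine. KVER, SRC and DEST are assumptions - on the real
# target use DEST=/ and SRC=<where you copied the files to>.
set -e
KVER=2.6.7
SRC=${SRC:-/tmp/kernel-staging}
DEST=${DEST:-/tmp/demo-root}      # use DEST=/ on the real machine

# Fake staging area so the sketch is runnable as-is; in real life these
# files come from the build box (bzImage from arch/i386/boot/, the
# module tree from /lib/modules/$KVER).
mkdir -p "$SRC/lib/modules/$KVER"
: > "$SRC/bzImage"

mkdir -p "$DEST/boot" "$DEST/lib/modules"
cp "$SRC/bzImage" "$DEST/boot/vmlinuz-$KVER"          # kernel image
cp -a "$SRC/lib/modules/$KVER" "$DEST/lib/modules/"   # module tree
echo "installed vmlinuz-$KVER under $DEST - now add a bootloader entry"
```

The last step, adding a bootloader entry, still has to be done by hand on the target.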
Alternatively, if you're using lilo, you can copy the kernel sources from one machine to the other and then run
Code:
make install
You'll want to copy the kernel headers (/usr/src/linux/include/) as well if you want to compile any software that depends on them.
Also, an obvious point: the kernel will only work on the other computer if its processor is compatible with the processor type selected during kernel configuration.
Reading that, and the kernel source's README ("do make install if you have lilo")...
What's lilo got to do with it?
VectorLinux uses GRUB, and that's the one I want. Can't I run 'make install' then? Not 'make modules_install' either?
What do I do, just copy things like mritch says? That puts them in place, but won't they be uninstalled and thus unusable?
I guess the modules can then be found under the source-code directory, not in /lib/modules?
Yes, yes, I know... I shouldn't do this on a computer someone else is going to use, not when I really don't know how to do it...
But I want to do it! And I know one thing: how to make a backup so I can boot back to the old kernel "if" something goes wrong...
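Since VectorLinux uses GRUB, that fallback can simply be a second menu entry in /boot/grub/menu.lst (GRUB legacy syntax; the disk and partition names here are assumptions for a single-IDE-disk setup):

```
# /boot/grub/menu.lst - new kernel first, old one kept as a fallback
title  VectorLinux (new 2.6.7 kernel)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.7 root=/dev/hda1 ro

title  VectorLinux (old kernel, fallback)
root   (hd0,0)
kernel /boot/vmlinuz.old root=/dev/hda1 ro
```

As long as the old image and its modules stay on disk, picking the second entry at the GRUB menu boots the machine exactly as before.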
AAARGGGHHHH!!
Compiling just finished with ugly errors:
{standard input}: Assembler messages:
{standard input}:623: Error: value of ffffffffffffff75 too large for field of 1 bytes at 0000000000000636
make[3]: *** [drivers/char/rio/rioroute.o] Error 1
make[2]: *** [drivers/char/rio/] Error 2
make[1]: *** [drivers/char/] Error 2
make: *** [drivers] Error 2
Is it time to give up now?
But then again: drivers/char/rio... I don't need that driver; maybe I can just exclude it?
(I could do it all again, once more, though I might have to do a little work too. Can't see why not; I could do that next week perhaps? It sure is a cruel world!)
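Excluding it is the sensible move: in menuconfig the option should be under Character devices ("Specialix RIO system support", symbol CONFIG_RIO). You can also flip it off in .config directly and re-run "make oldconfig"; here the sed substitution is demonstrated on a one-line sample rather than a real .config:

```shell
# Turn CONFIG_RIO=y into the "not set" form the kernel config expects.
# For real use, point sed at /usr/src/linux/.config instead of the
# sample line, then run "make oldconfig" to settle dependent options.
echo 'CONFIG_RIO=y' | sed 's/^CONFIG_RIO=.*/# CONFIG_RIO is not set/'
# prints: # CONFIG_RIO is not set
```

After that, the build skips drivers/char/rio entirely, so the assembler error there can't stop the compile.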