Slackware: This forum is for the discussion of Slackware Linux.
All you have to do is edit one line of the makefile so it installs in /boot.
Second, you name your kernel in the makefile also.
Third, it does run lilo, and it also moves the old kernel to vmlinuz.old (along with the other files), so all you have to do is point another lilo.conf entry at the .old kernel and presto: a fallback in case the new one fscks up on you.
In other words... a few simple steps can save you time later on when you use make install.
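For anyone following along, here's roughly what those edits look like. The `-custom` name, label, and root device below are made up for illustration; the `INSTALL_PATH` line ships commented out in 2.4-era trees, while 2.6 trees already default it to /boot.

```
# Top of the kernel source Makefile. "Naming your kernel" means setting
# EXTRAVERSION, which ends up in `uname -r` and /lib/modules/<version>:
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 11
EXTRAVERSION = -custom          # illustrative name

# INSTALL_PATH tells `make install` where to drop the kernel image:
export INSTALL_PATH=/boot
```

And the fallback stanza in lilo.conf:

```
# /etc/lilo.conf -- extra entry pointing at the renamed previous kernel,
# so the boot prompt offers a way back if the new kernel panics
# ("old" is an illustrative label, /dev/hda1 an illustrative root):
image = /boot/vmlinuz.old
  root = /dev/hda1
  label = old
  read-only
```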
So how is make install, or cp'ing the files, going to fix the memory limit?
Everyone does it their own way; don't start this kind of discussion, it will never end.
Originally posted by slackMeUp All you have to do is edit one line of the makefile so it installs in /boot.
Second, you name your kernel in the makefile also.
Right. And every time I download a new kernel, I have to edit the makefile again for two reasons: first, to change the kernel version; second, in case they've changed the file to account for new source files. It's more steps to edit the makefile than it is to copy the new files over once or twice, and while I dunno about you, I almost never need to compile a kernel more than once.
No. I'll stick with copying the files manually, thank you. And before you say what you're obviously thinking, the answer's going to stay "no", no matter what you say. Rolling your eyes and starting by saying things like "my god" is a good way to convince me that you're an asshole, not that you're right.
Got it in 2.... it sets the runlevel to 6, making the system reboot immediately after booting up.
I put that in place while trying to troubleshoot an intermittent problem with my gaming machine, where it'd reboot randomly after playing games for several hours. Turned out to be a defunct northbridge heatsink.
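For anyone who didn't get it in 2: the line being guessed at is the initdefault entry in /etc/inittab. Runlevel numbers here are the stock Slackware/sysvinit meanings.

```
# /etc/inittab -- the booby-trapped default runlevel. Runlevel 6 means
# "reboot", so init reboots the machine as soon as it finishes booting:
id:6:initdefault:

# Slackware's sane default is runlevel 3 (multiuser, console):
# id:3:initdefault:
```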
Originally posted by killerbob Right. And every time I download a new kernel, I have to edit the makefile again for two reasons: first, to change the kernel version. second, in case they change the file to account for new source files. It's more steps to edit the makefile than it is to copy the new files over once or twice, and while I dunno about you, I almost never need to compile a kernel more than once.
Right, and you're saying that patching the kernel source when a new kernel comes out, which does not require you to re-edit your makefile, is too hard?
A patch is a smaller download...
(and)
Upon recompile, most of the files that were unchanged in the update don't need to be compiled again, thus a shorter compile time...
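Here's a miniature, self-contained demo of that incremental-patch workflow. All the file and patch names are made up for illustration; real incremental patches come gzipped from kernel.org, and each one moves the tree up exactly one version (2.6.7 -> 2.6.8 -> 2.6.9 -> ...).

```shell
set -e

# Stand-in for an unpacked kernel tree at "2.6.7":
mkdir -p linux-2.6.7
printf 'version 2.6.7\n' > linux-2.6.7/VERSION

# Stand-in for what a patch-2.6.8 would contain, in miniature:
mkdir -p newtree
printf 'version 2.6.8\n' > newtree/VERSION
diff -u linux-2.6.7/VERSION newtree/VERSION > patch-2.6.8 || true

# The technique itself: apply each patch in version order with -p1
# (strip one leading path component) and -d (apply from the tree's top):
for p in patch-*; do
    patch -p1 -d linux-2.6.7 < "$p"
done

cat linux-2.6.7/VERSION
```

One caveat with real patch series: a plain glob sorts lexically, so patch-2.6.10 would sort before patch-2.6.9; list the patches explicitly or sort by version.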
Originally posted by killerbob A larger cache or buffer does not necessarily translate into better performance, particularly when it means more swapping requirements.
This is not the nineties anymore; caches like the buffer cache are dynamically sized these days. If an application suddenly requires huge amounts of memory, the buffer cache is automatically resized.
Quote:
Otherwise, better not to open that addressing space up in the first place, because it increases the memory requirements to keep track of and swap blocks in and out of active memory.
Most buffers just use a simple linked list with pointers to the data blocks; the memory requirement is negligible.
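You can watch those dynamically sized caches directly. Assuming a Linux /proc, the relevant /proc/meminfo fields are:

```shell
# "Buffers" and "Cached" are the dynamically-sized caches in question:
# they grow to soak up otherwise-idle RAM and are shrunk automatically
# when applications need the memory back.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```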
Originally posted by slackMeUp Right, and you're saying that patching the kernel source when a new kernel comes out, which does not require you to re-edit your makefile, is too hard?
A patch is a smaller download...
(and)
Upon recompile, most of the files that were unchanged in the update don't need to be compiled again, thus a shorter compile time...
Patches are only useful if you switch kernels every time a new one comes out. If you check the changelogs/fixes and only update when a kernel comes out fixing something that's actually affecting you, then you may find you're jumping from 2.6.7 to 2.6.11.8, and in that case, you can't patch directly. With a reasonably fast connection, I find it's faster to just download the new kernel than it is to download 14 patches and apply them in sequence. If I had a script that would download new patches and apply them in sequence, that would be different, but such a script is really more trouble than it's worth.
I have reasons for doing things the way I do, and more importantly, I've already told you that you've blown any chance you had at convincing me to try another way, so why are you still trying?
I think it's time to get back on track. I'm assuming that he's fixed his problem, unless this bickering hasn't convinced him to abandon the effort.
That's maturity on your part. I'm glad it's working. I remember the first time I compiled a kernel with 1GB of memory... and then saw in gkrellm that I had ~800MB.
Glad you're Slackin', mate. And thanks for posting back that it works. ;-)
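To tie that back to the question that started the thread: seeing only ~800MB of a 1GB machine is the classic 32-bit low-memory limit; without high memory support, the kernel only maps roughly the first 896MB of RAM. The fix is one kernel config option (menu location as in the 2.4/2.6 x86 config):

```
# Kernel .config, 32-bit x86. The default CONFIG_NOHIGHMEM build stops
# at ~896MB; for machines with 1-4GB of RAM, enable 4GB highmem support:
CONFIG_HIGHMEM4G=y
# In menuconfig: Processor type and features -> High Memory Support -> 4GB
```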