I've had a problem with Slackware 14 since I installed it. It runs like it's stuck in molasses. I've seen that Slackware installs a "huge" kernel by default, and that there's a way to cut the kernel down, but I haven't come across any kind of guide for doing that. And it sounds like a Gentoo kind of move, and I can't imagine doing it without recompiling and generating a new image for the boot loader to have to find. How hard is it, and is that going to help speed up what I've got now?
Here's lspci, if that gives any information. Looks like I've got a lot of Nvidia and AMD to work with.
'man mkinitrd' will explain the options. The explanations are reasonably straightforward.
Of course, use only the options you need and modify that example command for your system.
If you use lilo, then update /etc/lilo.conf and run the lilo command before rebooting. But don't worry, you can use the CD/DVD as a boot disk to get into your system if you unknowingly mangle lilo.conf.
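For illustration, a generic-kernel stanza added to /etc/lilo.conf might look like this (the kernel version, root device, and label here are assumptions, not values from this thread; keep your original huge-kernel stanza too, and run `lilo` as root afterwards):

```
# Additional stanza for the generic kernel + initrd
# (version and root device are assumptions -- adjust to your system)
image = /boot/vmlinuz-generic-3.2.29
  initrd = /boot/initrd.gz
  root = /dev/sda1
  label = generic
  read-only
```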
If you use grub legacy, then edit /boot/grub/menu.lst as necessary.
With either lilo or grub, don't remove the original boot commands. Instead add a new section for the generic kernel and initrd. That way, in addition to the CD/DVD, you also can boot into the system with the original huge kernel boot option.
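For grub legacy, the extra menu.lst entry would be along these lines (the (hd0,0) device, root partition, and kernel version are assumptions for a first-disk, first-partition install; check yours before rebooting):

```
# Extra entry in /boot/grub/menu.lst for the generic kernel
# (device names and version are assumptions -- adjust to your install)
title Slackware 14.0 (generic kernel)
root (hd0,0)
kernel /boot/vmlinuz-generic-3.2.29 root=/dev/sda1 ro
initrd /boot/initrd.gz
```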
First, run /usr/share/mkinitrd/mkinitrd_command_generator.sh as root.
It will show you the command you need to enter in order to generate the proper initrd.
Copy and paste it into the console; it will be something like this:
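The actual command wasn't preserved in this thread; for reference, the generator's output on a stock Slackware 14.0 install typically resembles the following (the kernel version, filesystem, root device, and module list below are assumptions; use exactly what the script prints for your machine):

```
# Run as root: prints a mkinitrd command tailored to the running system
/usr/share/mkinitrd/mkinitrd_command_generator.sh

# Typical output -- values here are placeholders, not yours:
mkinitrd -c -k 3.2.29 -f ext4 -r /dev/sda1 \
    -m usb-storage:ext4 -u -o /boot/initrd.gz
```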
Anyway, just in case, check what you have in /boot before rebooting. You're looking for a vmlinuz-generic-(smp)-(version) kernel. If you see only a vmlinuz-huge-* kernel, get the kernel-generic-smp package from the repositories.
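That check can be scripted; here is a small helper sketch (check_generic is a hypothetical function, not part of Slackware — it just globs a directory for a generic kernel image):

```shell
# Report whether a generic kernel image exists in the given directory
# (defaults to /boot); prints a hint if only the huge kernel is installed.
check_generic() {
  dir="${1:-/boot}"
  if ls "$dir"/vmlinuz-generic-* >/dev/null 2>&1; then
    echo "generic kernel found in $dir"
  else
    echo "no generic kernel found; install kernel-generic (or kernel-generic-smp)"
  fi
}

# Usage: check_generic        # inspects /boot
#        check_generic /mnt/boot
```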
Last edited by NorthBridge; 10-25-2012 at 07:32 PM.
Could anyone address the OP's question, "is that going to help speed up what he's got now"?
I mean, I know what ANNOUNCE and CHANGES say (including, somewhere, that it's OK to run what you installed ;p), and everyone knows kernels that load about everything plus the kitchen sink cost boot time and memory. But apart from that (and it's not like the OP actually offered any data on real or perceived bottlenecks), I don't see it quantified anywhere what performance gain can be had. Or should I take it that it's just SOP to say running "huge" has a huge impact on performance, without needing baseline SAR or other diagnostic output analysis? If so, with all due respect, doesn't that sound just a wee bit unscientific?
I just ran the mkinitrd command and was able to reboot after giving the grub update command from the Linux Mint install I have on another hard drive, and Slackware booted right up; all seemed well.
Then when I went to post this from Google Chrome, Chrome froze up big time when I opened more than one tab to check some things. The mouse and keyboard would still work, and that's all that kept me from going for the shut down button on the front of the computer, so I still have issues.
Not sure what to make of it. I installed Slackware because Google Chrome was freezing up so badly on the Ubuntu install I originally had on the other hard drive that the mouse and keyboard were totally inoperative, and a hard shutdown was the only way to get anything to work. Then it would freeze again after about 15 minutes of use. I never found out if it was Google, Ubuntu, my computer, or what. There wasn't any way to get a log to see what had happened after the reboot, as far as I could tell. So now I've got this. I wanted to see how Slackware wireless worked with NetworkManager, and that seems to work fine. But whatever makes this thing slow to a crawl is a pain in the nether regions.
Maybe something with the newer kernels, who knows? I also tried Fedora, although not on this computer. It never froze, but wireless was worthless; it couldn't connect to my router, which is in the next room.
Maybe something with the newer kernels, who knows?
It probably has nothing to do with the kernel and everything to do with X / Desktop Environment / Video drivers. Turn off composite and all shiny effects in KDE. Or use XFCE or some other window manager and see if the problem persists. Try using the NVIDIA binary blob (but make sure to blacklist nouveau if you do).
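If you do try the binary blob, blacklisting nouveau is one line in a modprobe config file (the filename below is only a common convention, not required; any *.conf file under /etc/modprobe.d/ is read):

```
# /etc/modprobe.d/blacklist-nouveau.conf
# Prevents the nouveau driver from loading so the NVIDIA blob can bind
blacklist nouveau
```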
Just as an explanation: it seems that you have only 512MB RAM in that machine, minus a part reserved for the video card, which seems to be onboard.
For heavy desktop environments like KDE 4 or Ubuntu's Unity this is simply not enough, especially when you run a web browser, since browsers nowadays are also resource hogs.
I would recommend putting some more RAM into that machine, if possible, or, as you have already done, switching to a more lightweight environment and maybe also more lightweight software.
Well, that's a bit of a stretch. I mean, my personal preference is a minimum of 6GB and my new machine has 32 (and I managed to use 31 of them!), but I have a machine with an old Celeron and 1GB of memory and it works. Just for browsing etc. it is just fine, and I used it at 512... BUT, and this relates to the OP's top output... I use swap space on that machine. Running a machine with 512MB of memory and no swap space isn't going to work for squat.
So to the OP... if you have any room left on that machine to add swap space, you should do so, and of course more memory would help. I guess I should also mention that while that Celeron machine will run KDE 4.x and Gnome 2.3.x, I generally use Fluxbox on it. Once the features of KDE are gone, I don't see the point of XFCE. Fluxbox is just an awesome little window manager, and on a system that can't have too many open windows to begin with, the right-click-anywhere thing is super sweet in my opinion. I think I drifted off topic here... my apologies.
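For anyone following along, adding a swap file is only a few commands (the 1 GB size and /swapfile path are just examples, and all of this needs root):

```
# Create a 1 GB swap file, restrict its permissions, format and enable it
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Verify it's active, then make it permanent across reboots
swapon -s
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```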
Despite the fact that swapping RAM to/from disk does work, I think that any system doing that constantly is pretty much poked. The history of virtual memory goes back to the days when memory was VERY scarce and VERY expensive, while disk was (relatively) cheap and fast. What's happened over time (30+ years) is that memory has become incredibly fast and cheap. On the other hand, while disk size has increased by the same ratio (that's a guess), disk access times and speeds have NOT. Thus, 30 years ago we could swap to disk with a penalty of, say, 100 times slower (a guess); today that same swap would be 10,000,000 times slower (again, a guess). The point is that if you have to swap in and out continually just to run, you're not going to have fun (unless you like to watch paint dry).