Slackware: This forum is for the discussion of Slackware Linux.
I've had a problem with Slackware 14 since I installed it. It runs like it's stuck in molasses. I've seen that Slackware installs a "huge" kernel, and that there's a way to cut the kernel down, but I haven't come across any kind of guide for doing that. It sounds like a Gentoo kind of move, and I can't imagine doing it without recompiling and generating a new image for the boot loader to find. How hard is it, and is it going to help speed up what I've got now?
Here's lspci, if that gives any information. Looks like I've got a lot of Nvidia and AMD to work with.
'man mkinitrd' will explain the options. The explanations are reasonably straightforward.
Of course, use only the options you need and modify that example command for your system.
If you use lilo, then update /etc/lilo.conf and run the lilo command before rebooting. But don't worry, you can use the CD/DVD as a boot disk to get into your system if you unknowingly mangle lilo.conf.
If you use grub legacy, then edit /boot/grub/menu.lst as necessary.
With either lilo or grub, don't remove the original boot commands. Instead add a new section for the generic kernel and initrd. That way, in addition to the CD/DVD, you also can boot into the system with the original huge kernel boot option.
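As a sketch, a lilo.conf with both entries might look like this; the kernel version (3.2.29, the stock Slackware 14.0 kernel) and the root device are assumptions, so adjust them for your system:

```
# Original entry for the huge kernel -- keep this as a fallback
image = /boot/vmlinuz-huge-3.2.29
  root = /dev/sda1        # assumed root partition; use yours
  label = slack-huge
  read-only

# New entry for the generic kernel plus its initrd
image = /boot/vmlinuz-generic-3.2.29
  initrd = /boot/initrd.gz
  root = /dev/sda1
  label = slack-generic
  read-only
```

Remember that with lilo the config file is not read at boot time: run the lilo command after editing so the changes are actually written out.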
First, as root, run /usr/share/mkinitrd/mkinitrd_command_generator.sh.
It will show you the command you need to enter in order to generate the proper initrd.
Copy and paste it into the console; it will be something like this:
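For reference (the original example seems to have been lost from this post), on a stock Slackware 14.0 install the generator typically prints a line along these lines; the kernel version, filesystem, root device, and module list here are assumptions for illustration, so use whatever the script actually prints for your box:

```shell
# Hypothetical output of mkinitrd_command_generator.sh -- values vary per system.
# -c clears any old initrd tree, -k picks the kernel version, -f the root
# filesystem, -r the root device, -m the modules to include, -u adds udev,
# and -o names the output image.
mkinitrd -c -k 3.2.29 -f ext4 -r /dev/sda1 \
         -m usb-storage:ext4 \
         -u -o /boot/initrd.gz
```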
Anyway, just in case, check what you have in /boot before rebooting. You're looking for a vmlinuz-generic-(smp)-(version) kernel. If you see only a vmlinuz-huge-* kernel, get the kernel-generic-smp package from the repositories.
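As a quick sketch of that check (the slackpkg invocation is an example; install the package however you normally would, as root):

```shell
# List the kernels installed in /boot
ls -1 /boot/vmlinuz*

# If only vmlinuz-huge-* shows up, pull in the generic kernel,
# e.g. with slackpkg as root:
slackpkg install kernel-generic-smp
```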
Could anyone address the OP's question, "is that going to help speed up what I've got now"?
I mean, I know what ANNOUNCE and CHANGES say (including, somewhere, that it's OK to run what you installed ;p), and everyone knows kernels that load about everything plus the kitchen sink cost boot time and memory. But apart from that (and it's not like the OP actually offered any data w.r.t. real or perceived bottlenecks), I don't see it quantified anywhere what performance gain can be had. Or should I take it that it's just SOP to say running "huge" has a huge impact on performance, without needing baseline SAR or other diagnostic output to analyze? If so, with all due respect, doesn't that sound just a wee bit unscientific?
unSpawn, it's a good question, but, well, there's a sure-fire quick (I hope!) way to find out! The system is going down for a kernel rebuild in 5... 4... 3... 2...
If it's still slow after switching to the generic kernel, then try running Fluxbox (not KDE) and see if that makes a difference. You can use the "xwmconfig" program to switch between DE's/WM's.
I just ran the mkinitrd command and was able to reboot after running the grub update command from the Linux Mint install I have on another hard drive. Slackware booted right up, and all seemed well.
Then, when I went to post this from Google Chrome, Chrome froze up big time when I opened more than one tab to check some things. The mouse and keyboard still worked, and that's all that kept me from going for the shutdown button on the front of the computer, so I still have issues.
Not sure what to make of it. I installed Slackware because Google Chrome was freezing up so badly on the Ubuntu install I originally had on the other hard drive that the mouse and keyboard were totally inoperative, and a hard shutdown was the only way to get anything working again. Then it would freeze again after about 15 minutes of use. I never found out if it was Google, Ubuntu, my computer, or what. There wasn't any way to get a log to see what had happened after the reboot, as far as I could tell. So now I've got this. I wanted to see how Slackware wireless worked with NetworkManager, and that seems to work fine. But whatever makes this thing slow to a crawl is a pain in the nether regions.
Maybe something with the newer kernels, who knows? I also tried Fedora, although not on this computer. It never froze, but wireless was worthless; it couldn't connect to my router, which is in the next room.
Maybe something with the newer kernels, who knows?
It probably has nothing to do with the kernel and everything to do with X / desktop environment / video drivers. Turn off compositing and all the shiny effects in KDE. Or use XFCE or some other window manager and see if the problem persists. Try using the NVIDIA binary blob (but make sure to blacklist nouveau if you do).
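A minimal sketch of that blacklist step, assuming the stock modprobe setup (the file name is my choice, and the steps must be run as root from a console with X stopped):

```shell
# Stop nouveau from being loaded at boot so the NVIDIA blob can bind
cat > /etc/modprobe.d/blacklist-nouveau.conf <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF

# Then run the installer downloaded from nvidia.com (version will vary)
# and reboot afterwards:
sh NVIDIA-Linux-*.run
```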
Just ran top to see what the output was; not sure what to make of the info. Changing to XFCE is pretty easy, if I remember, but how do I deselect nouveau and go with NVIDIA?
Code:
top - 21:42:32 up 1:02, 3 users, load average: 2.11, 1.18, 1.14
Tasks: 152 total, 5 running, 147 sleeping, 0 stopped, 0 zombie
Cpu(s): 17.3%us, 5.2%sy, 0.7%ni, 51.8%id, 24.8%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 493820k total, 485496k used, 8324k free, 204k buffers
Swap: 0k total, 0k used, 0k free, 85628k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2231 larry 20 0 302m 41m 12m R 6.3 8.6 0:13.67 plasma-desktop
2205 larry 20 0 289m 29m 6392 R 3.6 6.0 0:30.26 kwin
2067 root 20 0 81760 21m 11m S 3.3 4.5 0:54.84 X
2502 larry 20 0 195m 33m 12m R 1.7 6.8 2:38.41 chrome
2171 root 20 0 24948 1584 940 S 1.3 0.3 0:01.32 udisks-daemon
2315 larry 20 0 383m 64m 9448 R 1.3 13.4 1:44.30 chrome
1901 messageb 20 0 3532 1220 416 S 0.3 0.2 0:01.01 dbus-daemon
2131 larry 20 0 153m 8884 2176 S 0.3 1.8 0:03.28 kded4
2247 larry 20 0 239m 19m 876 S 0.3 4.0 0:04.94 mysqld
2281 larry 39 19 172m 7988 760 S 0.3 1.6 0:02.26 nepomukservices
2554 larry 39 19 98520 4180 1276 S 0.3 0.8 0:00.56 nepomukservices
2681 root 20 0 0 0 0 S 0.3 0.0 0:03.04 kworker/0:0
2816 larry 20 0 2836 780 480 R 0.3 0.2 0:01.25 top
2821 root 20 0 0 0 0 S 0.3 0.0 0:00.01 kworker/0:1
2822 root 20 0 0 0 0 S 0.3 0.0 0:00.13 kworker/u:1
1 root 20 0 2008 64 0 S 0.0 0.0 0:01.04 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
Just made the switch to XFCE, and it's WAY faster, multiple tabs are no problem now. I'll play with this for a bit and if it works, mark this as solved. Thanks!
EDIT: Yeah, it's better by a mile. Slackware just went from being the slowest operating system I'm using to perhaps the fastest, although I still have to give a nod to Fedora on a ThinkPad I'm using.
But this is way better, and I like that the answer was relatively painless, too.
Just as an explanation: it seems that you have only 512MB of RAM in that machine, minus a part for the video card, which seems to be onboard.
For heavy desktop environments like KDE 4 or Ubuntu's Unity, this is simply not enough, especially when you run a web browser, which nowadays is also a resource hog.
I would recommend putting more RAM into that machine if possible, or, as you have already done, switching to a more lightweight environment and maybe also more lightweight software.
Well, that's a bit of a stretch. I mean, my personal preference is a minimum of 6GB and my new machine has 32 (and I managed to use 31 of them!), but I have a machine with an old Celeron and 1GB of memory and it works. Just for browsing etc. it is just fine, and I used it at 512MB... BUT, and this relates to the OP's top output: I use swap space on that machine. Running a machine with 512MB of memory and no swap space isn't going to work for squat.
So, to the OP: if you have any room left on that machine to add swap space, you should do so, and of course more memory would help. I should also mention that while that Celeron machine will run KDE 4.x and GNOME 2.3.x, I generally use Fluxbox on it. Once the features of KDE are gone, I don't see the point of XFCE. Fluxbox is just an awesome little window manager, and on a system that can't have too many open windows to begin with, the right-click-anywhere thing is super sweet in my opinion. I think I drifted off topic here... my apologies.
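If repartitioning isn't an option, a swap file works too. A sketch, run as root; the path and the 1GB size are my choices, not requirements:

```shell
# Create and enable a 1 GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile   # swap files should not be world-readable
mkswap /swapfile
swapon /swapfile

# Make it permanent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

After that, `free` (or the Swap line in top) should show the new space.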
Despite the fact that swapping RAM to/from disk does work, I think any system doing that is pretty much poked. The history of virtual memory goes back to the days when memory was VERY scarce and VERY expensive, while disk was (relatively) cheap and fast. What's happened over time (30+ years) is that memory has become incredibly fast and cheap. On the other hand, while disk size has increased by the same ratio (that's a guess), disk access times and speeds have NOT. Thus, 30 years ago we could swap to disk with a penalty of, say, 100 times slower (a guess); today that same swap would be millions of times slower. The point is that if you have to swap in and out continually just to run, you're not going to have fun (unless you like to watch paint dry).
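To get a rough feel for that gap, here's a back-of-envelope in shell arithmetic; the latency figures (about 100 ns for a DRAM access, about 10 ms for a disk seek) are ballpark assumptions, not measurements:

```shell
# Rough latencies, both in nanoseconds (assumed figures)
dram_ns=100                 # ~100 ns DRAM access
disk_ns=$((10 * 1000000))   # ~10 ms disk seek = 10,000,000 ns

# How many memory accesses fit in the time of one disk seek
echo $((disk_ns / dram_ns))   # prints 100000
```

Even with generous assumptions, that's five orders of magnitude per page fault that has to hit the platter, which is why a 512MB box that pages constantly feels stuck in molasses.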