LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
Forums > Linux - Software > Linux - Kernel
This forum is for all discussion relating to the Linux kernel.
Old 01-11-2008, 04:35 PM   #1
dlublink
Member
 
Registered: Oct 2004
Location: Canada
Distribution: Ubuntu
Posts: 329

Rep: Reputation: 30
Swap Thrashing, can nothing be done?


Hello,

I have now been using Linux as my preferred desktop for about 18 months and my preferred server OS since 2003.

I find that Linux does not handle swapping very well. I have previously asked what can be done about swap thrashing and was told I need more memory.

Trouble is, that is not a good solution.

It happens on my laptop about once a week that it starts swap thrashing. A few moments ago the gtk-gnash program decided it needed 800 megabytes of memory, and in order to accommodate it, my computer became unusable for about 10 minutes.

Earlier today I opened a 135 megabyte document in OpenOffice, and my computer became unusable again.

The culprit? Swap thrashing.

This happens on my servers sometimes as well. A few weeks ago an application on one of our servers had a serious memory leak and caused the server to start thrashing. The server became unusable. I couldn't ssh to kill the software, I couldn't do anything at all!

I understand that hard disks are much slower than memory (in the case of my computer, I think the difference is a few hundred times), but that doesn't justify making the entire computer unusable because one program is freaking out.

The only information I found on controlling swap is the proc interface to the swappiness variable (is that the right term?). But in my case, it doesn't help.
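For readers landing here later: the knob the poster means is vm.swappiness, exposed at /proc/sys/vm/swappiness. A quick sketch of inspecting and (as root) changing it; the value 10 below is just an example, not a recommendation:

```shell
# Read the current swappiness; higher values make the kernel more
# willing to swap out anonymous pages in favor of the page cache
cat /proc/sys/vm/swappiness

# Lowering it needs root, so shown as comments only:
#   sysctl vm.swappiness=10
# To persist across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness = 10
```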

Why does Linux make my entire computer wait when it is swap thrashing instead of just making the program that is using too much memory wait?

In the case of my problem with gtk-gnash, why didn't Linux make gtk-gnash wait while the rest of the computer continued working?

Same question for OpenOffice.

I agree it could be argued that it would be too slow to wait for the memory-hogging application, but is it really better to wait 10 minutes on a totally frozen computer than 15 minutes for a single application? With the latter I can continue working, and I can even choose to cancel the program that is giving trouble.

Additionally, other than some 3D rendering software and some high-end games, does anyone really need more than 200 megabytes of active memory? I say give the program up to a certain amount of memory (which should be controllable through an interface similar to swappiness); if the program wants more than that, it can swap out the rest of its memory needs.

Of a program that is using 700 megabytes of memory, how much is actively being used? Why not swap out 675 megabytes of that memory and let all the other applications continue to run nicely?

Why should Linux swap out Firefox, and all my running programs so that a single memory hogging program can crash my computer with a memory leak?

I understand that I need to have enough memory to run my machine; I am not arguing that. What I am arguing is that when exceptional circumstances happen, my Linux machine should not be brought to its knees. Running Windows 98 on 32 megabytes of RAM was slow, even unbearable, but at least it wasn't unusable! I understand that Win98 doesn't have as much functionality as modern Linux, but my experience with Windows 98 is that even in the worst pagefile-thrashing moments, I could still chat on MSN, open my favorite web page and kill applications that were bothering me.

I love Linux, but I think that Linux has an Achilles heel and that it is the problem with swapping.

What do you think? Anybody want to comment? Any developers want to comment?

(Please don't post to ask how much memory I have, or to tell me I should go out and buy more memory; say something useful or nothing at all. I can't install enough memory to cope with every possible memory-leaking application I might ever face.)

David
 
Old 01-11-2008, 04:52 PM   #2
pljvaldez
Guru
 
Registered: Dec 2005
Location: Somewhere on the String
Distribution: Debian Squeeze (x86)
Posts: 6,092

Rep: Reputation: 269Reputation: 269Reputation: 269
I've never run into this problem, but the only suggestion I can make is to see if you can tune your hard disk a bit (make sure DMA is enabled, use hdparm, etc.) to increase disk I/O speed.
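A sketch of the kind of check being suggested; the device name /dev/sda is an assumption, and the hdparm calls need root, so they are shown as comments:

```shell
# List the block devices present on this machine first:
ls /sys/block

# Then, as root, benchmark raw read speed (read-only, safe) and check
# DMA status on an old IDE drive:
#   hdparm -tT /dev/sda
#   hdparm -d  /dev/sda    # show whether DMA is enabled
#   hdparm -d1 /dev/sda    # enable DMA
```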

Also, are you using a swap file or a swap partition? And are you running up against the swap size limit as well when this happens? If so, maybe increasing the size of swap would help.

Sorry I don't have any concrete solutions for you...
 
Old 01-11-2008, 04:57 PM   #3
pljvaldez
Guru
 
Registered: Dec 2005
Location: Somewhere on the String
Distribution: Debian Squeeze (x86)
Posts: 6,092

Rep: Reputation: 269Reputation: 269Reputation: 269
Here's an article about optimizing memory usage. Maybe there are some suggestions there also...

Oh, and read the part on swappiness in this Gentoo article...

Oh, and I seem to recall a utility called nice that lets you launch applications at a lower priority...
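For illustration, a minimal sketch of nice (and its disk-I/O cousin ionice); some_heavy_command is a placeholder:

```shell
# Start a command at the lowest CPU priority (niceness 19):
nice -n 19 sleep 0.1

# 'nice' with no command prints the current niceness; nested, it shows
# the value a child would inherit:
nice -n 10 nice

# With the CFQ I/O scheduler, ionice can also demote a process's disk
# priority (class 3 = idle):
#   ionice -c 3 some_heavy_command
```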

Also, try googling "Linux Performance Tuning"; I see a lot of articles in which you might find a nugget or two.

Last edited by pljvaldez; 01-11-2008 at 05:06 PM.
 
Old 01-11-2008, 11:09 PM   #4
sundialsvcs
Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 5,455

Rep: Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172
Not the most up-to-date info, but extremely well-written and complete (circa 2004)...

http://www.redhat.com/magazine/001nov04/features/vm/

There is also the ulimit command (described in man bash), and there are various limits in places like /etc/limits and /etc/security/limits.conf.

See: http://www.ronaldchristian.com/artic...60128170128242
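A small sketch of the ulimit approach described above; the 512 MB figure and the username in the limits.conf line are placeholders:

```shell
# Cap virtual memory at roughly 512 MB inside a subshell, so the limit
# does not stick to the interactive shell; 'ulimit -v' takes kilobytes.
# Allocations beyond the cap then fail instead of driving the box into
# swap.
( ulimit -v 524288; ulimit -v )

# The persistent equivalent is a line in /etc/security/limits.conf
# ('as' = address space, in KB; the username is a placeholder):
#   someuser  hard  as  524288
```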

As a matter of general principle, there are three ways that you can approach this issue: on a system-level (the system's response to the load presented to it), on a per-process level (the load that a particular process is allowed to present), or both.

Bear in mind that a process may allocate a large chunk of RAM and never use most of it. Lots of processes do that, and for the most part it doesn't cost anything. So you need to be judicious with your approach.

Yes, you are correct... with the commonly-used default settings, a Linux system can definitely be brought to its knees. But that's basically because the default settings are easy and generous, assuming that serious overload is unlikely. They do not have to be generous, however.

Last edited by sundialsvcs; 01-11-2008 at 11:21 PM.
 
Old 01-11-2008, 11:22 PM   #5
jailbait
Guru
 
Registered: Feb 2003
Location: Blue Ridge Mountain
Distribution: Debian Wheezy, Debian Jessie
Posts: 7,591

Rep: Reputation: 188Reputation: 188
Linux does not use swapping; it uses paging, so "swapping" is a misnomer in Linux. In swapping, the system moves one whole application out to pasture on the swap disk and lets everything else run. Thus all of your questions are valid criticisms of a swap system not working correctly. However, Linux pages, it does not swap. In paging, the operating system moves the pages that have been inactive the longest out to the page disk. Hopefully this has little or no impact on running programs, because they are probably not going to use the paged-out pages anytime soon. Thus in a paging system all running programs feel the memory shortage equally.

When you get into a really tight memory situation like the one you are experiencing, the solution is to solve the underlying memory shortage. Fiddling with the paging algorithm (or swap algorithm, as Linux mislabels it) generally makes the problem worse. So the solution is either to buy more RAM or to reduce your peak memory demands.
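A quick way to check whether the machine really is short of memory, using only /proc:

```shell
# How tight are RAM and swap right now?
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree)' /proc/meminfo

# Under load, vmstat is useful too: sustained nonzero si/so columns
# (pages swapped in/out per second) are the signature of thrashing:
#   vmstat 1
```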

--------------------------
Steve Stites

Last edited by jailbait; 01-11-2008 at 11:24 PM.
 
1 member found this post helpful.
Old 01-13-2008, 06:34 PM   #6
sundialsvcs
Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 5,455

Rep: Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172Reputation: 1172
Yes, the word "swapping" has had two distinct meanings in the history of operating systems, but these days it's all routinely handled by just one kernel process. The essential meaning, as the terms are used (interchangeably...) today, remains the same. What you say is, of course, true.

If you have a "runaway process" which is not only requesting gobs of memory but actually using it, you might need to impose hard limits upon such a process: kill it, or cause its memory-allocation requests to fail. But in real life that is pretty rare.

One place where I did see this sort of thing bring a system to its knees was with a program that loaded about 1.5 million record-IDs into a Perl hash from an indexed file, then sorted the hash (doubling the number of keys), then used it to access the records from that same file(!) sequentially. I quietly pointed out to the programmer in question that an indexed file will quite happily return its keys to you in whatever order you like, and the runtime of this program promptly dropped by about eightfold.

Categorically, I agree with your assessment that "chips are cheap" and that you should buy and install everything that your motherboard allows. There is nothing that affects a system more seriously than memory-shortage, and nothing that's easier to fix with a few coins of the realm.

In the days when "disk drives" were boxes that looked rather like washing-machines, we said a thrashing disk-drive was "in Maytag mode." {Maytag is a maker of washing-machines, and the drive looked like it was in the 'spin' cycle.} Once, such a drive wobbled its way across the raised-floor and dropped one leg into a nearby hole! Oops.

Last edited by sundialsvcs; 01-13-2008 at 06:38 PM.
 
Old 08-18-2010, 08:03 AM   #7
jordg
LQ Newbie
 
Registered: Jul 2004
Posts: 1

Rep: Reputation: 0
The easy (and, in my view, best) way is to turn swap off completely.

swapoff -a
Then edit /etc/fstab and comment out the swap line

It has been recommended to keep a small 32 MB swap so that only a few rarely-used pages stay swapped out, but in reality you are better off without it.
I have been caught too many times to bother with it. If you need more memory, get more RAM. A 64-bit system can handle more RAM than you can buy. Unfortunately the largest modules today are 8 GB, but with four of those you get 32 GB.
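To illustrate the two steps above (the UUID in the fstab lines is hypothetical):

```shell
# /proc/swaps lists active swap areas; once everything is off, only the
# header line remains:
cat /proc/swaps

# In /etc/fstab, the swap line gets a leading '#':
#   before:  UUID=abcd-1234  none  swap  sw  0 0
#   after:   # UUID=abcd-1234  none  swap  sw  0 0
```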

However, every solution has its own problems.
Beware that when a system is fully populated with RAM, heat may be an issue. One particular server was fully populated with 32 GB of 2 GB modules, but the cooling and sensors overlooked the RAM (great design), and the system kept cooking RAM modules. We ended up getting lower-voltage RAM at the expense of the vendor.

Maybe you could use an SSD for faster thrashing.
 
Old 08-08-2013, 06:55 AM   #8
tobixen
LQ Newbie
 
Registered: Aug 2013
Posts: 1

Rep: Reputation: Disabled
The thrashing issue has been annoying me for as long as I've been using Linux. "Buy more memory" just doesn't solve the problem; the fact is that any malicious, incompetent, unlucky or careless user can thrash a "normal" Linux system so hard that it has to be rebooted physically. The only options are to avoid having swap at all, or to set hard quotas on memory consumption, i.e. through ulimit or cgroups. I don't like those solutions, as they cause bad resource utilization. Swap is a good thing if genuinely inactive pages are occupying memory.
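A sketch of the cgroup route mentioned above, in the cgroup-v1 terms of the era; all paths and the 256 MB figure are assumptions, and the privileged commands are comments only:

```shell
# cgroup-v1 memory cap (needs root; controller paths vary by distro):
#   mkdir /sys/fs/cgroup/memory/capped
#   echo $((256*1024*1024)) > /sys/fs/cgroup/memory/capped/memory.limit_in_bytes
#   echo $$ > /sys/fs/cgroup/memory/capped/tasks
#
# On newer systemd machines, the same cap in one line:
#   systemd-run --scope -p MemoryMax=256M some_heavy_command

# Confirm the cgroup filesystem is mounted on this machine:
if [ -d /sys/fs/cgroup ]; then echo "cgroups available"; fi
```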

An old "simple-stupid" idea of mine was to monitor the situation from user space and temporarily suspend the processes generating the most page faults. During the last few days this issue has been annoying me enough that I made a prototype: https://github.com/tobixen/thrash-protect

This simple script has already (after running it in production for one working day) saved me from at least two logins to remote interfaces, and (more importantly) kept the server and most of its processes going. A few processes were suspended for a while.
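For the curious, the idea behind the script can be sketched in a few lines of shell. This is not the actual thrash-protect code, just an illustration: find the process with the most major page faults, which the real tool would then briefly SIGSTOP.

```shell
# In /proc/<pid>/stat, majflt is overall field 12; after stripping the
# leading "pid (comm) " (comm may contain spaces), it becomes
# positional field 10.
worst_pid=0; worst_faults=-1
for stat in /proc/[0-9]*/stat; do
  read -r line < "$stat" 2>/dev/null || continue  # process may have exited
  rest=${line##*) }      # drop "pid (comm) "
  set -- $rest           # $1=state, $2=ppid, ..., $10=majflt
  faults=${10}
  pid=${stat#/proc/}; pid=${pid%/stat}
  if [ "$faults" -gt "$worst_faults" ]; then
    worst_faults=$faults
    worst_pid=$pid
  fi
done
echo "worst offender: pid $worst_pid ($worst_faults major faults)"

# A protector would then pause it briefly instead of letting it thrash:
#   kill -STOP "$worst_pid"; sleep 5; kill -CONT "$worst_pid"
```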
 
  

