Thank you so much guys for the incredible information... I'm going through the thread and am learning so much. It is great that you guys share your knowledge.
Greetz
I can't help but wonder why the formula "Swap = X times RAM" ever existed, and even more why it still persists today. If anything, it should rightly be more like "Swap = RAM / X", since the more RAM one has, the less swap one needs. This of course assumes a strict definition of "swap", which essentially means: when we are in danger of running out of RAM, we can swap to the hard drive as a substitute for RAM, to prevent crashes or killed processes.

There are so many myths and half-truths about swap, and apparently it has always been that way. Probably someone just said "give me a stupidly simple rule of thumb for a size that will never come back to haunt us" and thought, "Hmmm... I can buy 100MB of HDD for the cost of 10MB of RAM (back in the day), so nobody is going to complain too much if the cheaper solution means never approaching the cap. Make it HUGE!" I recall guys buying drives to install swap on a separate drive for a performance boost, paying no mind to the fact that IDE drives and Windows operating systems (IIRC NT4 was their first, and then only with SCSI drives) did not support simultaneous access.

I further don't understand why, in this age of multi-terabyte drives, anyone would tempt fate by installing any modern operating system to 15GB, especially with Slackware which, rightly, recommends a full install. My tuppence? Get at least 4G of RAM, make root at least 100G, make /home a separate 200G partition, waste a 200MB partition on swap if you just must, and call it a day. You will never hate yourself for it.
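The "Swap = RAM / X" idea above can be sketched as a tiny shell helper. Everything here is illustrative: the divisor of 8 and the 200MB floor are assumptions matching the numbers in this post, not any standard formula.

```shell
# Hypothetical helper: swap shrinks as RAM grows ("Swap = RAM / X"),
# with a small floor, per the 200MB suggestion above. Divisor is an assumption.
suggest_swap_mb() {
    ram_mb=$1          # installed RAM in MB
    divisor=8          # assumed: more RAM -> proportionally less swap
    floor_mb=200       # assumed minimum partition size
    swap=$(( ram_mb / divisor ))
    if [ "$swap" -lt "$floor_mb" ]; then
        swap=$floor_mb
    fi
    echo "$swap"
}

suggest_swap_mb 4096    # 4G of RAM -> prints 512 (MB)
suggest_swap_mb 1024    # 1G of RAM -> prints 200 (MB floor)
```

Contrast this with the old "Swap = 2 × RAM" rule, which only gets bigger as RAM grows.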
Quote:
Partition sizes (including swap) always are dependent on your specific use-case, there simply can't be a "one size fits all" solution to that. |
Quote:
Example: Every fork() allocates a lot of memory that is not really used, because the pages only get copied on write. So for the new process the memory is reserved in RAM or in the swapfile, but almost never actually used. It must be present, though: the process could write to it at any time if it wants to. That is how it works if you have a properly designed memory management system: if you don't enable swap, you effectively can only use one quarter of your installed RAM, and the remainder is basically reserved and wasted. This is, for example, how traditional Unices like Solaris work.

On Linux things are a bit different: Linux allows you to allocate more memory than you have installed, and the complete system violently breaks down if you start to use that non-existent but allocated memory. Why? The Linux memory manager is basically designed for embedded systems, where you are usually short on RAM and have no swap at all. The "solution": Linux pretends that it has unlimited memory (otherwise embedded devices would get expensive) as long as that memory doesn't get used, and crashes if it suddenly runs out of RAM. For an embedded device (like a smartphone) this is not a big deal: an (automatic) reset/power-cycle and the device continues to work, mostly without the user even noticing. For mission-critical servers this is a peril.

So if you want "embedded-like" behavior, go on and leave out swap. If you want a reliable high-availability system, then you had better make that "unlimited memory" assumption a reality and add as much swap as possible, so the kernel never even has a chance to run out of usable memory, even in the most unlikely worst-case scenario.
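The overcommit behavior described in that quote is tunable on Linux. A quick way to see which policy your kernel is using (these `/proc` paths are the standard kernel interface, not assumptions):

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit
# (the "embedded-like" behavior), 2 = strict accounting (Solaris-like).
mode=$(cat /proc/sys/vm/overcommit_memory)
case "$mode" in
    0) echo "heuristic overcommit (default)" ;;
    1) echo "always overcommit (embedded-style, never refuse an allocation)" ;;
    2) echo "strict accounting: commit limit = swap + RAM * overcommit_ratio/100" ;;
esac
# Only relevant in mode 2:
cat /proc/sys/vm/overcommit_ratio
```

Setting mode 2 (via `sysctl vm.overcommit_memory=2`, as root) is one way to get the "refuse allocations up front instead of crashing later" behavior, at the cost of the reserved-but-unused memory the quote describes.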
First, thank you for a succinct and pleasant response. Good read. I hope you don't mind a few questions.
So, in practice, since many non-embedded systems are quite successful, even some mission-critical ones, has this been due to disciplined "turning off the lights when you leave the room"? What are your (or anyone else's) thoughts on this?
I recall also being proud of the fact that OS/2, for a few years anyway, was the OS of choice for banks, ATMs (the money machines), hospitals, air traffic control, etc. While living near Norfolk, Virginia, I recall laughing when the US Navy launched a Windows NT-powered ship that, in less than two days, had to be towed back in. We joked a lot about the new meaning of Blue Screen of Death.

Although I can see the value of increasing swap according to both the cost of failure and the likelihood of such failure, it remains that, monitoring swap usage, I have never been exposed to a system that actually needs more swap than RAM (again, excepting hibernation, but I am unconcerned with laptops). In fact, I've never seen a need for swap equal to RAM unless the designer for some odd reason chose to constrain the system. I'd really like to hear more on this if the OP doesn't mind.
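For anyone who wants to do the same kind of monitoring before picking a swap size, a few standard Linux tools show actual usage; the `awk` one-liner computing a percentage is my own sketch over `/proc/meminfo`, which is the stable kernel interface:

```shell
# Point-in-time swap usage, three ways (free and swapon are the usual tools):
free -m                                      # "Swap:" row: total/used/free in MB
swapon --show                                # per-device usage (util-linux)
grep -E 'SwapTotal|SwapFree' /proc/meminfo   # raw numbers in kB

# Percentage of swap actually in use, computed from /proc/meminfo:
awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2}
     END {if (t) printf "%.1f%% used\n", 100*(t-f)/t; else print "no swap"}' \
    /proc/meminfo
```

Watching that percentage over weeks of real workload, as described above, is a far better basis for sizing than any rule of thumb.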
Quote:
http://opsmonkey.blogspot.de/2007/01...vercommit.html

This article also describes a workaround called the "OOM killer", which exists because once you have granted a memory allocation to an application, you can't refuse it retroactively later.
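When the OOM killer does have to act, it picks victims by score. The per-process files below are the standard Linux interface for inspecting and influencing that choice:

```shell
# Higher oom_score = more likely to be killed when memory runs out.
cat /proc/self/oom_score

# oom_score_adj (-1000..1000) biases the choice; -1000 makes a process
# effectively unkillable (writing it usually needs privileges).
cat /proc/self/oom_score_adj

# Past OOM kills land in the kernel log (dmesg may be restricted to root):
dmesg 2>/dev/null | grep -i 'out of memory' || echo "no OOM events visible"
```

This is why "just add swap" and "tune oom_score_adj on critical daemons" are complementary strategies: the first delays the OOM killer, the second steers it.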
I'll just keep it simple.
This is my main system.
[code]~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        47G   14G   33G  30% /
/dev/sda3       882G  133G  750G  16% /home[/code]
I run my root with around 50G, plus or minus a few G's. The only thing I separate is /home. I have a 3G swap, simply because it just feels right to have one. Since this machine has 8G of memory I rarely get into "swap space".
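One nice thing about a layout like this is that the 3G swap partition isn't a permanent commitment either way: on Linux a swap file works just as well. A rough sketch (paths and the tiny demo size are illustrative; a real one would be something like a multi-gigabyte `/swapfile`, and `swapon` needs root):

```shell
# Sketch: adding swap after install via a swap file instead of repartitioning.
SWAPFILE=/tmp/demo.swap               # real use: a file on the root filesystem
dd if=/dev/zero of="$SWAPFILE" bs=1M count=4 status=none   # tiny demo size
chmod 600 "$SWAPFILE"                 # swap files must not be world-readable
mkswap "$SWAPFILE"                    # writes the swap signature
# swapon "$SWAPFILE"                  # activation needs root, so commented here
# swapoff "$SWAPFILE"                 # ...and deactivation likewise
rm -f "$SWAPFILE"
```

(Note: `fallocate` is often suggested instead of `dd`, but it doesn't work for swap files on all filesystems, so `dd` is the safe choice.)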
Quote:
Since the ATX standard for motherboards and power supplies defaults to keeping a small amount of power flowing to the system, the distinction does get fuzzier: one must flip the power supply switch to actually and completely power down. There is even some redundancy here, as many, if not most, apps keep a "history" of their last state before the app is terminated. This existed back in the AT days as well, and it seems to have expanded since ATX, as with tabbed browsers, where one can terminate the parent app while retaining the history of its active children. This is done through a cache which is essentially an app-specific swap file. Imagine a system in which all apps had a "previous state" cache/swapfile. Perhaps the degree to which this occurs demonstrates some redundancy that is possibly unnecessarily duplicated.
I quoted the below since it is apropos and interesting, but also because I'm wondering if the bold part is a typo.
Quote:
I should reveal that much of my recent focus on swap (both system-wide and app-specific) has to do with the changes I already see in Linuxes using some variation of systemd. The shutdown process is so fast that I can't yet imagine there is sufficient wait time for a proper switch to read-only filesystems, just to name one concern. Underlying that is a concern about too much redundancy, and my history of monitoring system swap file hits and seeing 0-3% usage as common, even on sizes as small as 200MB. Mostly I suppose it is idle curiosity, but this stuff fascinates me :)
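For anyone wanting to reproduce that kind of "swap file hits" monitoring, the kernel exposes both the tuning knob and the live paging activity (standard tools; `vmstat` comes from procps and may need installing):

```shell
# How aggressively the kernel swaps: 0..100 historically,
# up to 200 on recent kernels; the usual default is 60.
cat /proc/sys/vm/swappiness

# Live swap traffic: the "si" (swap-in) and "so" (swap-out) columns,
# sampled once per second, three samples. Sustained nonzero values
# mean swap is actually being exercised, not just allocated.
command -v vmstat >/dev/null && vmstat 1 3
```

If `si`/`so` sit at zero across a typical workload, that matches the 0-3% usage observation above.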
Quote:
I guess many people see this differently. Oh, and I can assure you that hibernate works not only in theory but also in practice, as intended, with a completely powered-down machine.
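Hibernation is also the one case where the "swap at least as big as RAM" rule still has teeth, since the RAM image is written to swap. A quick sanity check (these `/proc` and `/sys` paths are the standard kernel interface):

```shell
# Does this kernel support suspend-to-disk at all?
grep -q disk /sys/power/state \
    && echo "suspend-to-disk supported" \
    || echo "no hibernate support in this kernel"

# Compare RAM against swap; worst case, hibernation needs roughly
# SwapTotal >= MemTotal (in practice the image is often compressed/smaller).
grep -E 'MemTotal|SwapTotal' /proc/meminfo
```

So "swap = RAM" is really a hibernation requirement that got repeated as a general rule.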
I have a 500GB hard drive with an 8GB swap partition, a 100GB root partition, and the rest for /home. I probably don't need the swap partition with 4GB of RAM, but it's habit now to set it up that way during an install.