LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   What are recommended sizes for partitions and swap during install? (https://www.linuxquestions.org/questions/slackware-14/what-are-recommended-sizes-for-partitions-and-swap-during-install-4175489166/)

74razor 12-27-2013 06:18 AM

Thank you so much guys for the incredible information... I'm going through the thread and am learning so much. It is great that you guys share your knowledge.

enorbet 12-28-2013 10:13 AM

Greetz
I can't help but wonder why the formula "Swap = X times Ram" ever existed and, even more, why it still persists today. First off, it should rightly be more like "Swap = Ram / X", since the more ram one has the less swap one needs. This of course assumes a strict definition of "swap" which essentially means, "When we are in danger of running out of ram, we can swap to hard drive to substitute for ram, to prevent crashes or killed processes".

There are so many myths and half-truths about swap and apparently it's always been like that, probably since someone just said "give me a stupidly simple rule of thumb for a size that will never come back to haunt us" and they thought..."Hmmm... I can buy a 100MB of hdd for the cost of 10MB of ram (back in the day) so nobody is going to complain too much if the cheaper solution results in never approaching cap... Make it HUGE!"

I recall guys buying drives to install swap on a separate drive for a performance boost, paying no mind to the fact that IDE drives and the Windows operating systems of the day (IIRC NT4 was their first, and then only with SCSI drives) did not support simultaneous access.

I further don't understand why, in this age of multi-terabyte drives, anyone would tempt fate by installing any modern operating system to 15GB, especially with Slackware which, rightly, recommends a "full install".

My Twapence? Get at least 4G Ram, make root at least 100G, make "/home" a separate 200G partition, waste 200MB partition on "/swap" if you just must and call it a day. You will never hate yourself for it.

TobiSGD 12-28-2013 11:20 AM

Quote:

Originally Posted by enorbet (Post 5088087)
My Twapence? Get at least 4G Ram, make root at least 100G, make "/home" a separate 200G partition, waste 200MB partition on "/swap" if you just must and call it a day. You will never hate yourself for it.

And then visit me and try the same on one of my SSDs, one with only 40GB and one with 120GB. And since this is a laptop, when you are at it, try to hibernate a machine with 4GB of RAM on a swap partition (which is by the way not mounted to /swap) of only 200MB.
Partition sizes (including swap) always are dependent on your specific use-case, there simply can't be a "one size fits all" solution to that.

jtsn 12-28-2013 01:55 PM

Quote:

Originally Posted by enorbet (Post 5088087)
I can't help but wonder why the formula "Swap = X times Ram" ever existed and, even more, why it still persists today.

The reason is memory overcommit. To make optimal use of the installed RAM, you want user processes to be able to allocate about two to three times more memory than physically installed in the machine and have that memory swap-backed.

Example: Every fork() allocates a lot of memory, but that memory is not really used, because the pages only get copied on write. So for the new process the memory is reserved in RAM or swapfile, but almost never actually used. It must be reserved, though, because the process could write to it at any time if it wants to.
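
You can see that gap between "allocated" and "actually used" on any Linux box by comparing a process's virtual size with its resident size (the PID and numbers below are only illustrative):
[code]
# VSZ counts everything the process has allocated or mapped,
# RSS counts only the pages actually backed by RAM right now.
$ ps -o pid,vsz,rss,comm -p $$
  PID    VSZ   RSS COMMAND
 4711  26460  4932 bash
[/code]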

That is how it works if you have a properly designed memory management system: if you don't enable swap, you can effectively only use one quarter of your installed RAM and the remainder is basically reserved and wasted. This is, for example, how traditional Unices like Solaris work.

On Linux things are a bit different: Linux allows you to allocate more memory than you have installed, and the complete system violently breaks down if you start to use that non-existent but allocated memory.

Why is this? The Linux memory manager is basically designed for embedded systems, where you are usually short on RAM and have no swap at all. The "solution": Linux pretends that it has unlimited memory (otherwise embedded devices would get expensive) as long as that memory doesn't get used, and crashes if it suddenly runs out of RAM. For an embedded device (like a smartphone) this is not a big deal: an (automatic) reset/power-cycle and the device continues to work, mostly without the user even noticing.
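
You can watch this policy on a running system; the mode and figures below are only an example:
[code]
# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict accounting
$ cat /proc/sys/vm/overcommit_memory
0
# Committed_AS is what the kernel has promised to processes;
# with overcommit it can exceed MemTotal (and even CommitLimit).
$ grep -E 'MemTotal|CommitLimit|Committed_AS' /proc/meminfo
MemTotal:        8056344 kB
CommitLimit:     6076244 kB
Committed_AS:   11204480 kB
[/code]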

For mission-critical servers this is a peril. So if you want "embedded-like" behavior, go on and leave out swap. If you want a reliable high-availability system, then you had better make that "unlimited memory" assumption a reality and add as much swap as possible, so the kernel never even has a chance to run out of usable memory, even in the most unlikely worst-case scenario.
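
And if you find out later that you sized swap too small, you don't even have to repartition; a swap file does the same job (the 4 GB here is just an example):
[code]
# Create and enable a 4 GB swap file (run as root)
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s    # verify it is active
# To make it permanent, add to /etc/fstab:
# /swapfile  none  swap  defaults  0  0
[/code]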

enorbet 12-28-2013 02:58 PM

Quote:

Originally Posted by TobiSGD (Post 5088118)
And then visit me and try the same on one of my SSDs, one with only 40GB and one with 120GB. And since this is a laptop, when you are at it, try to hibernate a machine with 4GB of RAM on a swap partition (which is by the way not mounted to /swap) of only 200MB.
Partition sizes (including swap) always are dependent on your specific use-case, there simply can't be a "one size fits all" solution to that.

Heheh... yeah, I realized much too late that I forgot to specify non-hibernating, which is probably a bias of mine since I see little value in hibernation, which is basically laptop specific (battery powered), right? Why not just turn it all the way off?

enorbet 12-28-2013 03:46 PM

First, thank you for a succinct and pleasant response. Good read. I hope you don't mind a few questions.

Quote:

Originally Posted by jtsn (Post 5088178)
The reason is memory overcommit. To make optimal use of the installed RAM, you want user processes to be able to allocate about two to three times more memory than physically installed in the machine and have that memory swap-backed.

From where did this idea originate? There are machines made with many times the amount of ram they will, in practice, ever use. If we have a system that averages 10% usage of the installed ram and never exceeds 70%, why do we need 3 to 4 times the maximum (the 70% of installed it never exceeds)? Is the remaining 30% not sufficient headroom? And for what?

Quote:

Originally Posted by jtsn (Post 5088178)
Example: Every fork() allocates a lot of memory, but that memory is not really used, because the pages only get copied on write. So for the new process the memory is reserved in RAM or swapfile, but almost never actually used. It must be reserved, though, because the process could write to it at any time if it wants to.

That is how it works if you have a properly designed memory management system: if you don't enable swap, you can effectively only use one quarter of your installed RAM and the remainder is basically reserved and wasted. This is, for example, how traditional Unices like Solaris work.

The emboldened part above (that without swap you can effectively only use one quarter of your installed RAM) is what I don't yet get. I can't see designing for an extreme that is rarely even approached, let alone reached, just because it theoretically could be. I do understand badly designed systems with memory leakage, where ram is reserved, effectively forgotten, and then reserved yet again, but since you said "properly designed memory management system" I can't help but wonder why Solaris was ever so paranoid after so many years of actual use for a database of what really occurs.

Quote:

Originally Posted by jtsn (Post 5088178)
On Linux things are a bit different: Linux allows you to allocate more memory than you have installed, and the complete system violently breaks down if you start to use that non-existent but allocated memory.

Why is this? The Linux memory manager is basically designed for embedded systems, where you are usually short on RAM and have no swap at all. The "solution": Linux pretends that it has unlimited memory (otherwise embedded devices would get expensive) as long as that memory doesn't get used, and crashes if it suddenly runs out of RAM. For an embedded device (like a smartphone) this is not a big deal: an (automatic) reset/power-cycle and the device continues to work, mostly without the user even noticing.

I have never read nor realized this and it is fascinating to me. I don't recall Linus Torvalds ever saying he designed Linux especially for embedded systems or even based on the concept. It does seem particularly suited for embedded systems though.

So, in practice, since many non-embedded systems are quite successful, even some mission-critical ones, has this been due to disciplined "turning off the lights when you leave the room"? What are your (or anyone else's) thoughts on this?

Quote:

Originally Posted by jtsn (Post 5088178)
For mission-critical servers this is a peril. So if you want "embedded-like" behavior, go on and leave out swap. If you want a reliable high-availability system, then you had better make that "unlimited memory" assumption a reality and add as much swap as possible, so the kernel never even has a chance to run out of usable memory, even in the most unlikely worst-case scenario.

This is especially interesting! I have never had the pleasure of running "big iron" on AIX or the like, but I have had some experience with IBM's OS/2 and I'm fairly proud of a small system (around 80 workstations, back in the 90s) based around OS/2 v2.1 that I helped set up and run, and that never had an unscheduled reboot in the 2 years I was associated with it. IIRC it had 4G Ram, and yes, 1G swap too, but I always wondered about its value when it was so rarely "hit". Back then, a 1G RAID just for swap seemed a huge and possibly unnecessary expense. The customer was very pleased though.

I recall also being proud of the fact that OS/2, for a few years anyway, was the OS of choice for banks, ATMs (the money machines), hospitals, air traffic control, etc. While living near Norfolk, Virginia, I recall laughing when the US Navy launched a Windows NT powered ship that in less than 2 days had to be towed back in. We joked a lot about the new meaning of Blue Screen of Death.

Although I can see the value of increasing swap according to both the cost of failure and the likelihood of such failure, it remains that in all my monitoring of swap usage I have never been exposed to a system that actually needs more swap than ram (again, excepting hibernation, but I am unconcerned with laptops). In fact, I've never seen a need for swap equal to ram unless the designer for some odd reason chose to constrain the system.

I'd really like to hear more on this if OP doesn't mind.

TobiSGD 12-28-2013 05:46 PM

Quote:

Originally Posted by enorbet (Post 5088197)
Heheh... yeah, I realized much too late that I forgot to specify non-hibernating, which is probably a bias of mine since I see little value in hibernation, which is basically laptop specific (battery powered), right? Why not just turn it all the way off?

When hibernating, unlike with suspend, the machine is completely turned off. That is why the contents of the RAM and the state of the machine have to be saved to non-volatile storage in the first place.
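
On a typical Linux setup that image is written into the swap area, which is why the swap partition has to be at least as large as the hibernation image. Assuming the kernel was booted with a matching resume= parameter, you can trigger it by hand:
[code]
# Suspend to disk (as root); the RAM image goes to the swap device
# named by the resume= boot parameter before the machine powers off
echo disk > /sys/power/state
[/code]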

jtsn 12-29-2013 01:16 AM

Quote:

Originally Posted by enorbet (Post 5088210)
I can't help but wonder why Solaris was ever so paranoid after so many years of actual use for a database of what really occurs.

It is not only Solaris, even Windows requires memory allocations at least to be swap-backed. OTOH Linux allows you to allocate > 1 TB on an 8 GB machine, if that memory stays unused. See this article:

http://opsmonkey.blogspot.de/2007/01...vercommit.html

This article also describes a workaround called the "OOM killer", needed because once you have granted a memory allocation to an application, you can't retract it later.
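
If you prefer the traditional behaviour, Linux can be switched to strict commit accounting, so allocations fail up front instead of being granted and possibly OOM-killed later (the ratio below is just an example):
[code]
# Strict accounting: commit limit = swap + RAM * overcommit_ratio/100
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80
# From now on malloc()/fork() are refused cleanly once the limit is
# reached, instead of succeeding and risking the OOM killer later.
[/code]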

Quote:

I have never read nor realized this and it is fascinating to me. I don't recall Linus Torvalds ever saying he designed Linux especially for embedded systems or even based on the concept. It does seem particularly suited for embedded systems though.
In the early days Linux was designed for hobbyist machines with their usual RAM shortage. Overcommit allowed you to start more processes without having to add expensive memory. Disk-paging (swap) functionality got added at a later stage IIRC.

Quote:

So, in practice, since many non-embedded systems are quite successful, even some mission-critical ones, has this been due to disciplined "turning off the lights when you leave the room"? What are your (or anyone else's) thoughts on this?
As you can read in this thread, most people handle the issue by throwing hardware at it once the machine "slows to a crawl" (due to heavy paging activity). The concept of gracefully handling out-of-memory conditions inside the application setup seems to be mostly unknown.

Quote:

Although I can see the value of increasing swap according to both the cost of failure and the likelihood of such failure, it remains that in all my monitoring of swap usage I have never been exposed to a system that actually needs more swap than ram (again, excepting hibernation, but I am unconcerned with laptops). In fact, I've never seen a need for swap equal to ram unless the designer for some odd reason chose to constrain the system.
In a correctly designed setup, new memory allocations (including creating new processes) should be rejected once the OS is out of physical memory and starts using swap heavily. You don't want allocations to be rejected much earlier (which wastes RAM), nor much later (which means the machine would never stop writing to swap). On traditional OSes without (!) overcommit you usually get this result by having Swap = RAM x 2.5. That is where the formula originates.
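
As a back-of-the-envelope check of that rule (numbers made up for an 8 GB machine):
[code]
# Without overcommit, roughly: allocatable = RAM + swap.
# With the old rule of thumb, swap = 2.5 x RAM:
ram=8
swap=$(echo "$ram * 2.5" | bc)
echo "swap: ${swap} GB, total commit space: $(echo "$ram + $swap" | bc) GB"
# swap: 20.0 GB, total commit space: 28.0 GB
[/code]
Which lines up with letting processes commit roughly two to three times more than the installed RAM.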

chrisretusn 12-29-2013 01:59 AM

I'll just keep it simple.

This is my main system.
[code]
~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        47G   14G   33G  30% /
/dev/sda3       882G  133G  750G  16% /home
[/code]
I run my root with around 50G plus or minus a few G's. The only thing I separate is /home.

I have a 3G swap, simply because it just feels right to have one. Since this machine has 8G of memory I rarely get into "swap space".
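
For anyone curious how much of such a swap partition actually gets touched, a quick look is enough:
[code]
# Current swap usage at a glance
$ free -m
$ swapon -s
[/code]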

enorbet 12-29-2013 05:55 AM

Quote:

Originally Posted by TobiSGD (Post 5088240)
When hibernating, unlike with suspend, the machine is completely turned off. That is why the contents of the RAM and the state of the machine have to be saved to non-volatile storage in the first place.

It is my understanding that while this is theoretically so, it is functionally non-existent, since the only reason for the utility to back up (and return to) the previous state is for settings like "wake on lan", so an unattended "wake up" can return an item to service. I can see no net value in having the previous state written to swap unless an unattended wake up is desirable.

Since the ATX standard for motherboards and power supplies defaults to keeping a small amount of power to the system, the distinction does get fuzzier. One must flip the power supply switch to actually and completely "power down". There is even some redundancy here, as many, if not most, apps keep a "history" of the last state before the app is terminated. This existed back in AT days as well.

This seems to have expanded since ATX, as with tabbed browsers, where one can choose to terminate the parent app while retaining the history of the active children. This is done through a cache, which is essentially an app-specific swap file.

Imagine a system in which all apps had a "previous state" cache/swapfile. Perhaps the degree to which this already occurs demonstrates some possibly unnecessary duplication.

allend 12-29-2013 06:14 AM

Quote:

It is my understanding that while this is theoretically so, it is functionally non-existent, since the only reason for the utility to back up (and return to) the previous state is for settings like "wake on lan", so an unattended "wake up" can return an item to service. I can see no net value in having the previous state written to swap unless an unattended wake up is desirable.
Sorry mate, but I really, really like hibernation on my netbook. The minutes saved by avoiding a cold boot may seem trivial, but can make the difference between a toy and a productivity tool.

enorbet 12-29-2013 06:19 AM

I quoted the below since it is apropos and interesting, but also because I'm wondering if the bolded part (the "2/5 of available RAM" default) is a typo.

Quote:

Originally Posted by ArchWiki
(found here)
About swap partition/file size

Even if your swap partition is smaller than RAM, you still have a big chance of hibernating successfully. According to kernel documentation:

/sys/power/image_size controls the size of the image created by the suspend-to-disk mechanism. It can be written a string representing a non-negative integer that will be used as an upper limit of the image size, in bytes. The suspend-to-disk mechanism will do its best to ensure the image size will not exceed that number. However, if this turns out to be impossible, it will try to suspend anyway using the smallest image possible. In particular, if "0" is written to this file, the suspend image will be as small as possible. Reading from this file will display the current image size limit, which is set to 2/5 of available RAM by default.

or is that just coincidence?
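
For reference, that tunable can be read and adjusted at runtime, exactly as the kernel documentation quoted above describes:
[code]
# Current upper limit for the hibernation image, in bytes
cat /sys/power/image_size
# Ask for the smallest possible image (as root)
echo 0 > /sys/power/image_size
[/code]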

I should reveal that much of my focus lately on swap (both system-wide and app-specific) has to do with the changes I already see with Linuxes using some variation of systemd. The shutdown process is so fast that I can't yet imagine there is sufficient wait time for a proper switch of filesystems to read-only, just to name one concern.

Underlying that is a concern for too much redundancy and my history of monitoring system swap file hits and seeing 0-3% common, even on sizes as small as 200MB. Mostly I suppose it is idle curiosity but this stuff fascinates me :)
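
The monitoring I mean is along these lines; non-zero si/so columns are what I count as swap actually being "hit":
[code]
# Memory and swap activity every 5 seconds;
# si = KB/s swapped in from disk, so = KB/s swapped out to disk
vmstat 5
# If the sysstat package is installed, historical swapping statistics:
sar -W
[/code]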

TobiSGD 12-29-2013 07:40 AM

Quote:

Originally Posted by enorbet (Post 5088429)
It is my understanding that while this is theoretically so, it is functionally non-existent, since the only reason for the utility to back up (and return to) the previous state is for settings like "wake on lan", so an unattended "wake up" can return an item to service. I can see no net value in having the previous state written to swap unless an unattended wake up is desirable.

You mean saving the complete state of the system, including open tabs in a browser and their position on the page, video players at the exact position they are currently playing, the document you are working on with the cursor at its exact position, and so on, has no value in a world of battery-powered mobile devices?
I guess many people see this differently. Oh, and I can assure you that hibernate does not only work theoretically, but also in practice, as it is intended, with a completely powered-down machine.

Bertman123 12-29-2013 10:00 AM

I have a 500gb hard drive with an 8gb swap partition, 100gb root partition, and the rest for /home. Probably don't need the swap partition with 4gb of ram, but it's habit now to set it up that way during an install.

273 12-29-2013 10:10 AM

Quote:

Originally Posted by enorbet (Post 5088087)
... make root at least 100G,...

What? At least 100GB? I've a kitchen-sink install of Debian (the king of dependency installers) using 17.3GB of a 30GB drive. 100GB is pointless -- I do concede it may not really be missed from a 1TB drive, but anything over 20GB assumes a deliberate attempt to install everything, and anything over 30GB displays a perverse will to make 99% of people's root partitions look small.

