LinuxQuestions.org > Slackware (https://www.linuxquestions.org/questions/slackware-14/)
What are recommended sizes for partitions and swap during install? (https://www.linuxquestions.org/questions/slackware-14/what-are-recommended-sizes-for-partitions-and-swap-during-install-4175489166/)

273 12-25-2013 08:21 PM

Quote:

Originally Posted by 273 (Post 5086957)
Thanks for the clarification, since I don't have swap I feel better using tmpfs.

Ah, sorry, does that mean that tmpfs will use swap if RAM is full? Doesn't that mean if your RAM is generally full it is the same as using your HDD? Also, will ramfs not use swap ever? If so, that sounds like a better idea to me when using an SSD: you can flood your memory, then use swap for processes while you work out which process to kill to regain your RAM.

jtsn 12-25-2013 08:34 PM

Quote:

Originally Posted by mlslk31 (Post 5086958)
They're still there, just not as obvious as it is on Windows. After a while, though, a speed gain can still be had from doing a backup/zero/format/restore cycle.

Of course, you can get some temporary speed gain on single-tasking, but on Unix we want multi-tasking/multi-user throughput and its filesystems are optimized for that. So after a while every filesystem stabilizes on a specific fragmentation configuration for optimal throughput and stays there.

Quote:

That "while" can be a year or more, though, if I manage major deletes responsibly.
So you actually delete and rebuild your filesystems after they reached their optimal configuration?

Quote:

Some filesystems (XFS, ext4, and btrfs) have defragmentation tools, but some filesystems (JFS, NILFS2, F2FS, and some more) don't have defragmentation tools.

You don't want defragmentation tools, because Unix-like filesystems (like Ext2/4, XFS, UFS) actually do fragment files on purpose!*) They do it to reach the goals described above: maximum multi-tasking/multi-user throughput.

*) Files above 8 MB can't even be stored in a single fragment.

jtsn 12-25-2013 08:39 PM

Quote:

Originally Posted by 273 (Post 5086959)
Ah, sorry, does that mean that tmpfs will use swap if RAM is full? Doesn't that mean if your RAM is generally full it is the same as using your HDD? Also, will ramfs not use swap ever?

Tmpfs will use RAM occupied by the buffer cache without writing its content out to disk, because it's temporary anyway. But it gives the kernel the option to use swap in a low-memory situation. Another good reason for tmpfs is its built-in size limit (which defaults to half the RAM size).

Ramfs won't ever use swap; it keeps the occupied memory locked in RAM.
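
For reference, that size cap can also be set explicitly when mounting a tmpfs; a minimal sketch (the mount point and the 2 GiB size are only examples):

Code:

# mount a tmpfs with an explicit 2 GiB cap
mount -t tmpfs -o size=2G tmpfs /tmp

# or the equivalent /etc/fstab entry:
# tmpfs   /tmp   tmpfs   size=2G,nosuid,nodev   0   0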

273 12-25-2013 08:45 PM

Thanks, good to know.

mlslk31 12-25-2013 09:01 PM

Quote:

Originally Posted by jtsn (Post 5086963)
So you actually delete and rebuild your filesystems after they have reached their optimal configuration?

Sometimes, sometimes not. If I have one of my adventures where I just want to compile one non-Slackware thing, and it asks me to recompile half the system and add 35 prerequisites, I'll probably back up and restore once everything is deemed *perfect* and stays perfect for a month. For a system that's been running perfectly for a year, though, I'll probably just run defrag and back it up, maybe even restore to another partition for use as an alternate/rescue system.

ttk 12-26-2013 01:38 AM

I haven't used a swap partition in years. If you have enough RAM to facilitate installation (used to be 1GB, but that might be stale), you can get through it with no swap, and install swap files after installation.

How much swap you need depends entirely on the applications you expect to run, and how you want your system to behave.

Some people prefer to not install any swap ever, and have the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing disk and bringing the entire system to its knees. The presumption here, of course, is that memory will only be depleted when something is terribly wrong. This goes back to the question of which applications you expect to run.

If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want to install swap. For instance, if you need to open very large Excel spreadsheets in OpenOffice, this can consume many gigabytes of memory. If that's more memory than you have, then it sucks, but the job still needs to get done. If you let Firefox run for weeks (like I do), you can rely on swap to alert you when you've hit the limits of your memory, and then shut Firefox down and restart it gracefully. It totally depends on what you need to do.

My usual mode of operation is to use no swap partition, then create a swapfile equal to memory size in /, activating it in /etc/rc.d/rc.local like so:

Code:

find /swapfile.? -exec swapon {} \;
.. and then create more swapfiles as needed, before running OpenOffice or whatever, like so:

Code:

# 262144 blocks of 65536 bytes = 16 GiB
dd if=/dev/zero of=/swapfile.3 bs=65536 count=262144
chmod 600 /swapfile.3
mkswap /swapfile.3
swapon /swapfile.3

(to create, format, and activate a 16GB swapfile).

The line in rc.local ensures that all such swapfiles will be activated the next time the system boots, too.

I've created swapfiles 10x or more the size of physical memory to accommodate piggish data-mining processes, but usually 2x to 2.5x physical memory is all a desktop needs.
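
When the big job is done, an extra swapfile created that way can be dropped again; for example (using the same file name as above):

Code:

swapoff /swapfile.3
rm /swapfile.3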

ReaperX7 12-26-2013 02:39 AM

Quote:

Originally Posted by jtsn (Post 5086963)
You don't want defragmentation tools, because Unix-like filesystems (like Ext2/4, XFS, UFS) actually do fragment files on purpose!*) They do it to reach the goals described above: maximum multi-tasking/multi-user throughput.

*) Files above 8 MB can't even be stored in a single fragment.

Defragmentation is only useful when files start to suffer from read/write errors, but you have to know what to defragment. User-space files, like those in /home and /root (the admin user's home), are what should be defragmented; core system files in / and the libraries and applications in /usr shouldn't require regular defragmentation, if any at all, unless you're preparing for a major system update and want files in a single contiguous space.

Plus, defragmentation on some file systems requires that the file system be taken offline and unmounted.

Often it's best just to run a simple file system integrity check and fix any errors as needed. If you need to defragment, only defragment the user files.
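
A rough sketch of that routine on ext4 (the device and paths are only examples; the filesystem should be unmounted for the check, and e4defrag applies to ext4 only):

Code:

# force a check of an unmounted ext4 filesystem
fsck.ext4 -f /dev/sda3

# report how fragmented the user files actually are
e4defrag -c /home

# defragment only the user files, if the report suggests it's worthwhile
e4defrag /home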

jefro 12-26-2013 09:30 AM

There really isn't a swap formula anymore. Since RAM is so cheap, you can decide for yourself whether you want or need swap. Swap is mandatory if you need hibernation. Otherwise, use your measured RAM usage or your expected load to decide. In either case, a swap partition, a swap file (possibly on RAID), or even several swap devices with set priorities may be what you need.

Many people still caution against swap on an SSD. I'm not sure I have an opinion; I might avoid swap.

/home is useful for backups.

/boot may be needed on larger disks.

/var is traditional, but I rarely use it in a home setup.

At one time the idea that partitioning speeds up access was pretty common, and there may have been some valid tests behind it. It still might prove true, but on an SSD I'm not sure one could ever measure it.
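
For what it's worth, a split along the lines described above might end up as an /etc/fstab roughly like this (the device names and the choice of ext4 are only placeholders):

Code:

/dev/sda1        swap        swap        defaults        0   0
/dev/sda2        /           ext4        defaults        1   1
/dev/sda3        /home       ext4        defaults        1   2
/dev/sda4        /boot       ext4        defaults        1   2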

Richard Cranium 12-26-2013 12:28 PM

Quote:

Originally Posted by jtsn (Post 5086963)
Of course, you can get some temporary speed gain on single-tasking, but on Unix we want multi-tasking/multi-user throughput and its filesystems are optimized for that. So after a while every filesystem stabilizes on a specific fragmentation configuration for optimal throughput and stays there.

This is complete garbage.

You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level. For that to be even remotely feasible, the OS would have to understand the manner in which you access your data. For standard hard drives, you just want as many bytes of a file on the same cylinder as possible to reduce head movement while reading data.

273 12-26-2013 12:31 PM

Since we're straying into that territory, I hope I'm not too off-topic to ask this question here:
Am I right in thinking that fragmentation isn't an issue at all on SSDs? I've always assumed that to be the case from what I understand, but does anyone know for certain?

ReaperX7 12-26-2013 04:29 PM

SSDs actually will suffer from fragmentation, but defragmenting them reduces their lifespan through the constant reads and writes to the cells. There are other methods of preventing fragmentation on SSDs, such as specialized file system journaling techniques that keep files in place while caching them to a standard HDD for reads and writes, then writing back to the SSD only as needed.

qweasd 12-26-2013 08:13 PM

Quote:

Originally Posted by 74razor (Post 5086924)
Just curious, I plan on making a swap partition and /, /home, and /var on separate partitions. I've went through the wiki but I don't see much as far as recommended sizes. I have a 160 GB SSD.

You don't have a lot of space, and chopping the disk up always wastes space. You will waste 15 GiB or so on the root alone (because if you don't leave that headroom, you risk hitting the cap), and a few more GiB on /var. You can waste any amount of space on /tmp, if you elect to have one. So I would recommend two partitions: swap (about 1.1x RAM if you want to hibernate, and 0 to 0.5x RAM if you don't), and the rest for the root. Just monitor the available disk space (which you should do anyway) and you will be fine.
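
If it helps, that two-partition layout could be sketched with parted like this (the device name, the MBR label, and the 8 GiB swap size are only assumptions for the example):

Code:

# label the disk and create swap + root partitions
parted -s /dev/sda mklabel msdos
parted -s -a optimal /dev/sda mkpart primary linux-swap 1MiB 8GiB
parted -s -a optimal /dev/sda mkpart primary ext4 8GiB 100%

# format them
mkswap /dev/sda1
mkfs.ext4 /dev/sda2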

jtsn 12-26-2013 09:14 PM

Quote:

Originally Posted by ReaperX7 (Post 5087058)
Defragmentation is only useful when files start to suffer from read/write errors, but you have to know what to defragment. User-space files, like those in /home and /root (the admin user's home), are what should be defragmented; core system files in / and the libraries and applications in /usr shouldn't require regular defragmentation, if any at all, unless you're preparing for a major system update and want files in a single contiguous space.

Nothing needs to be defragmented at all. That's the whole point of having a filesystem more advanced than NTFS or HFS+.

jtsn 12-26-2013 09:32 PM

Quote:

Originally Posted by Richard Cranium (Post 5087223)
This is complete garbage.

You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level.

It only changes file fragment locations on write accesses to those files. So after using a filesystem for a while, the on-disk result of a well-designed filesystem is the optimum for that specific use pattern, while the performance of a bad filesystem (like NTFS or HFS+) gets worse with every write access.

Technology that's advanced enough that you don't understand it may look like magic to you. ;) Again, Unix filesystems fragment files on purpose. There is no point in defragmenting them, nor is it even possible for files above 8 MB.

Quote:

For that to be even remotely feasible, the OS would have to understand the manner in which you access your data.

And that's what the OS does. But it is not watching you as a single, selfish user: the goal is to maximize overall throughput for all users and processes.

Quote:

For standard hard drives, you just want as many bytes of a file on the same cylinder as possible to reduce head movement while reading data.

No, a multi-tasking OS isn't interested in any single file. You want the files that are accessed in parallel placed together in the same block group. You want to be able to do elevator seeks to reduce latency, so you want evenly filled block groups, which means bigger files get fragmented and scattered over the block groups on purpose. All of those requirements are accommodated even by a simple Linux filesystem like ext2.

Sorry, but PC magazine folklore from the DOS stone age doesn't apply here.
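
One way to see this in practice (filefrag comes with e2fsprogs; the path is only an example): a large file on ext4 will usually report several extents even on a lightly used filesystem, and that is expected rather than a problem.

Code:

# list the extents a large file occupies on an extent-based filesystem
filefrag -v /var/tmp/example-large-file.iso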

jtsn 12-26-2013 10:02 PM

Quote:

Originally Posted by ttk (Post 5087048)
How much swap you need depends entirely on the applications you expect to run, and how you want your system to behave.

Actually, on Unix with a correctly designed memory manager (like Solaris), you want swap to make optimal use of the installed memory. Otherwise it stays mostly unused.

Quote:

Some people prefer to not install any swap ever, and have the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing disk and bringing the entire system to its knees. The presumption here, of course, is that memory will only be depleted when something is terribly wrong.

The sole existence of the OOM killer is a hint that something is very wrong with the virtual memory manager of Linux: it hands out memory to applications that it just doesn't have (neither in RAM nor in swap) and crashes them if they start to use it...
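
For context, that behaviour is Linux's memory overcommit, and it is tunable; a rough sketch of switching to strict accounting (the ratio is only an example, and strict mode can break software that relies on overcommit):

Code:

# 2 = strict accounting: commit limit = swap + overcommit_ratio% of RAM
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80

# inspect the resulting commit limit and how much is currently committed
grep -i commit /proc/meminfo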

Quote:

If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want to install swap.

Swap doesn't mean anything slows to a crawl; just have a look at this OpenWrt system:

Code:

$ free
                    total        used        free      shared     buffers
Mem:                29212       26696        2516           0        1472
-/+ buffers:                    25224        3988
Swap:              262140        6844      255296

It's just running fine, nothing crawls. Having swap frees up some memory, so the performance is actually better than without it. Of course, there is no heavy swap activity, just unused stuff paged out.

Quote:

OpenOffice, this can consume many gigabytes of memory. If that's more memory than you have,

If a user application can use more memory than is installed in the machine, the system administrator forgot to set the resource limits correctly.
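
A minimal sketch of such limits (the user name and the roughly 4 GiB address-space cap are made-up values for the example):

Code:

# per-shell cap on virtual address space, in KiB (here about 4 GiB)
ulimit -v 4194304

# or persistently via pam_limits in /etc/security/limits.conf:
# <user>      <type>   <item>   <value in KiB>
#  someuser    hard     as       4194304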


All times are GMT -5.