Old 12-25-2013, 09:21 PM   #16
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373

Quote:
Originally Posted by 273 View Post
Thanks for the clarification, since I don't have swap I feel better using tmpfs.
Ah, sorry, does that mean that tmpfs will use swap if RAM is full? Doesn't that mean that if your RAM is generally full, it's the same as using your HDD? Also, will ramfs never use swap? If so, that sounds like a better idea to me when using an SSD: you can flood your memory, then use swap for other processes whilst you work out which process to kill to regain your RAM.
 
Old 12-25-2013, 09:34 PM   #17
jtsn
Member
 
Registered: Sep 2011
Posts: 925

Rep: Reputation: 483
Quote:
Originally Posted by mlslk31 View Post
They're still there, just not as obvious as it is on Windows. After a while, though, a speed gain can still be had from doing a backup/zero/format/restore cycle.
Of course, you can get some temporary speed gain on single-tasking, but on Unix we want multi-tasking/multi-user throughput and its filesystems are optimized for that. So after a while every filesystem stabilizes on a specific fragmentation configuration for optimal throughput and stays there.

Quote:
That "while" can be a year or more, though, if I manage major deletes responsibly.
So you actually delete and rebuild your filesystems after they reached their optimal configuration?

Quote:
Some filesystems (XFS, ext4, and btrfs) have defragmentation tools, but some filesystems (JFS, NILFS2, F2FS, and some more) don't have defragmentation tools.
You don't want defragmentation tools, because Unix-like filesystems (like Ext2/4, XFS, UFS) actually do fragment files on purpose!*) They do it to reach the goals described above: maximum multi-tasking/multi-user throughput.

*) Files above 8 MB can't even be stored in a single fragment.
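As an aside (the file path below is just an example, not from the post): the filefrag tool from e2fsprogs prints a file's on-disk extent layout, which makes the deliberate fragmentation described above directly visible.

Code:
# Print the extent layout of a file; several extents on a large file
# are normal on ext4, per the point made above.
filefrag -v /var/log/messages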
 
Old 12-25-2013, 09:39 PM   #18
jtsn
Member
 
Registered: Sep 2011
Posts: 925

Rep: Reputation: 483
Quote:
Originally Posted by 273 View Post
Ah, sorry, does that mean that tmpfs will use swap if RAM is full? Doesn't that mean if your RAM is generally full it is the same as using your HDD? Also, will ramfs not use swap ever?
Tmpfs will use RAM occupied by the buffer cache without writing its content out to disk, because it's temporary anyway. But it gives the kernel the option to use swap in a low-memory situation. Another good reason for tmpfs is its built-in size limit (it defaults to half of RAM).

Ramfs won't use swap ever, it completely blocks the occupied memory.
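To illustrate the difference (the mount points and the size cap are hypothetical examples): a tmpfs can be mounted with an explicit size limit and may be paged to swap, while a ramfs enforces no size limit and is never swapped.

Code:
# tmpfs capped at 1 GiB; contents may be swapped out under memory pressure.
mount -t tmpfs -o size=1G tmpfs /mnt/scratch
# ramfs: never swapped, and no size limit is enforced.
mount -t ramfs ramfs /mnt/pinned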
 
1 member found this post helpful.
Old 12-25-2013, 09:45 PM   #19
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373
Thanks, good to know.
 
Old 12-25-2013, 10:01 PM   #20
mlslk31
Member
 
Registered: Mar 2013
Location: Florida, USA
Distribution: Slackware, FreeBSD
Posts: 210

Rep: Reputation: 77
Quote:
Originally Posted by jtsn View Post
So you actually delete and rebuild your filesystems after they reached their optimal configuration?
Sometimes, sometimes not. If I've had one of my adventures where I just wanted to compile one non-Slackware thing, and it asked me to recompile half the system and add 35 prerequisites, I'll probably back up and restore once everything is deemed to be *perfect* and has stayed perfect for a month. For a system that's been running perfectly for a year, though, I'll probably just run defrag and back it up, maybe even restore to another partition for use as an alternate/rescue system.
 
Old 12-26-2013, 02:38 AM   #21
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,039
Blog Entries: 27

Rep: Reputation: 1485
I haven't used a swap partition in years. If you have enough RAM to facilitate installation (used to be 1GB, but that might be stale), you can get through it with no swap, and install swap files after installation.

How much swap you need depends entirely on the applications you expect to run, and how you want your system to behave.

Some people prefer to not install any swap ever, and have the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing disk and bringing the entire system to its knees. The presumption here, of course, is that memory will only be depleted when something is terribly wrong. This goes back to the question of which applications you expect to run.

If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want to install swap. For instance, opening very large Excel spreadsheets in OpenOffice can consume many gigabytes of memory. If that's more memory than you have, then that sucks, but the job still needs to get done. If you let Firefox run for weeks (like I do), you can depend on swap to alert you when you've hit the limits of your memory, so you can tell Firefox to shut down and restart gracefully. It totally depends on what you need to do.

My usual mode of operation is to use no swap partition, then create a swapfile equal to memory size in /, activating it in /etc/rc.d/rc.local like so:

Code:
# Activate every /swapfile.N that exists at boot time.
find /swapfile.? -exec swapon {} \;
...and then create more swapfiles as needed, before running OpenOffice or whatever, like so:

Code:
# Write 16 GiB of zeroes (65536-byte blocks x 262144 = 16 GiB).
dd if=/dev/zero of=/swapfile.3 bs=65536 count=262144
# Restrict access to root; swap can contain sensitive data.
chmod 600 /swapfile.3
# Format the file as swap space and activate it.
mkswap /swapfile.3
swapon /swapfile.3
(to create, format, and activate a 16GB swapfile).

The line in rc.local assures that all such swapfiles will be activated next time the system boots, too.

I've created swapfiles 10x or more the size of physical memory, for accommodating piggish datamining processes, but usually 2x to 2.5x physical memory size is all a desktop needs.
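As a follow-up sketch (reusing the example file name from above): such a swapfile is just as easy to deactivate and remove once the memory-hungry job has finished.

Code:
# Deactivate the swapfile and reclaim its disk space.
swapoff /swapfile.3
rm /swapfile.3
# List whatever swap remains active.
swapon -s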
 
1 member found this post helpful.
Old 12-26-2013, 03:39 AM   #22
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,564
Blog Entries: 15

Rep: Reputation: 2118
Quote:
Originally Posted by jtsn View Post
Of course, you can get some temporary speed gain on single-tasking, but on Unix we want multi-tasking/multi-user throughput and its filesystems are optimized for that. So after a while every filesystem stabilizes on a specific fragmentation configuration for optimal throughput and stays there.


So you actually delete and rebuild your filesystems after they reached their optimal configuration?


You don't want defragmentation tools, because Unix-like filesystems (like Ext2/4, XFS, UFS) actually do fragment files on purpose!*) They do it for reaching the goals described above: maximum multi-tasking/multi-user throughput.

*) Files above 8 MB can't even be stored in a single fragment.
Defragmentation is only useful when files start to suffer from read/write errors, but you have to know what to defragment. User-space files, like those in /home and /root (the admin user's home), are what should be defragmented. Core system files in / and the libraries and applications under /usr shouldn't require regular defragmentation, if any at all, unless you're preparing for a major system update and want files in one contiguous place.

Plus, defragmentation on some file systems requires that the file system be taken offline and unmounted.

Often it's best just to run a simple file system integrity check, look for errors, and fix them as needed. If you need to defragment, only defragment the user files.
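For reference, a minimal sketch of that advice (the device name and mount point are hypothetical, and e4defrag assumes ext4 with e2fsprogs installed): run the integrity check against the unmounted filesystem, then defragment only the user files.

Code:
# Check an unmounted filesystem for errors (example device name).
umount /home
fsck -f /dev/sda3
mount /home
# On ext4, optionally defragment just the user files (works online).
e4defrag /home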
 
Old 12-26-2013, 10:30 AM   #23
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,228

Rep: Reputation: 3651
There really isn't a swap-size formula anymore. Since RAM is so cheap, one can decide whether one wants or needs swap. Swap is mandatory if you need hibernation. Otherwise, use your measured RAM usage or your expected load to decide. In either case, a swap partition, a swap file (possibly on RAID), or even priority-set swap may be needed.
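A minimal sketch of priority-set swap (device names are examples): the pri= option in /etc/fstab makes the kernel fill higher-priority swap first and stripe across devices of equal priority.

Code:
# /etc/fstab: higher pri= values are used first; equal values are striped.
/dev/sda2  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0
/swapfile  none  swap  sw,pri=5   0  0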

Many people still caution against swap on an SSD. I'm not sure I have an opinion; I might avoid it.

/home is useful for backups.

/boot may be needed on larger disks.

/var is traditional, but I rarely use it in a home setup.

At one time, the idea that partitions speed up access was pretty common, and maybe there were some valid tests behind it. It still might prove true, but on an SSD I'm not sure one could ever measure it.
 
Old 12-26-2013, 01:28 PM   #24
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,860

Rep: Reputation: 2230
Quote:
Originally Posted by jtsn View Post
Of course, you can get some temporary speed gain on single-tasking, but on Unix we want multi-tasking/multi-user throughput and its filesystems are optimized for that. So after a while every filesystem stabilizes on a specific fragmentation configuration for optimal throughput and stays there.
This is complete garbage.

You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level. For that to be even remotely feasible, the OS would have to understand the manner in which you access your data. For standard hard drives, you just want as many bytes of a file on the same cylinder as possible to reduce head movement while reading data.
 
Old 12-26-2013, 01:31 PM   #25
273
LQ Addict
 
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680

Rep: Reputation: 2373
Since we're straying into that territory I hope I'm not too off topic to ask this question here:
Am I right in thinking that fragmentation isn't an issue at all on SSDs? I've always assumed that to be the case from what I understand, but does anyone know for certain?
 
Old 12-26-2013, 05:29 PM   #26
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,564
Blog Entries: 15

Rep: Reputation: 2118
SSDs actually will suffer from fragmentation, but defragmenting them reduces their lifespan through the constant reads and writes to the cells. There are other methods of preventing fragmentation on SSDs, such as specialized filesystem journaling techniques that keep files in place while caching them on a standard HDD for reads and writes, then writing back to the SSD only as needed.
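A hedged aside, not from the post above: the usual SSD maintenance step on Linux is TRIM rather than defragmentation; it tells the drive which blocks are unused so the controller can manage wear itself. util-linux ships fstrim for this, assuming the kernel, filesystem, and drive all support discard.

Code:
# Report unused blocks on / to the SSD (run occasionally, e.g. from cron).
fstrim -v /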
 
1 member found this post helpful.
Old 12-26-2013, 09:13 PM   #27
qweasd
Member
 
Registered: May 2010
Posts: 621

Rep: Reputation: Disabled
Quote:
Originally Posted by 74razor View Post
Just curious, I plan on making a swap partition and /, /home, and /var on separate partitions. I've gone through the wiki but I don't see much as far as recommended sizes. I have a 160 GB SSD.
You don't have a lot of space, and chopping it up always wastes some. You will waste 15 GiB or so on just the root (because if you don't, you risk hitting the cap). A few more GiB go to /var. You can waste any amount of space on /tmp, if you elect to have one. So I would recommend two partitions: swap (1.1× RAM size if you want to hibernate, anywhere from 0 to 0.5× if you don't), and the rest for the root. Just monitor the available disk space (which you should do anyway, as sketched below), and you will be fine.
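A minimal sketch of that monitoring (the 90% threshold is an arbitrary example):

Code:
# Show usage of the root filesystem.
df -h /
# Warn once usage passes 90% (example threshold).
df -P / | awk 'NR == 2 { sub(/%/, "", $5); if ($5 + 0 > 90) print "root filesystem over 90% full" }'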
 
1 member found this post helpful.
Old 12-26-2013, 10:14 PM   #28
jtsn
Member
 
Registered: Sep 2011
Posts: 925

Rep: Reputation: 483
Quote:
Originally Posted by ReaperX7 View Post
Defragmentation is only useful when files start to suffer from read/write errors, but you have to know what to defragment. User space files like those in /home and /root(admin-user) are what should be defragmented but core system files in /(root) and /usr libraries and applications shouldn't require defragmentation regularly unless you're preparing for a major system update and want files in a singular space, if at all.
Nothing needs to be defragmented at all. That's the whole point of having a filesystem more advanced than NTFS or HFS+.
 
1 member found this post helpful.
Old 12-26-2013, 10:32 PM   #29
jtsn
Member
 
Registered: Sep 2011
Posts: 925

Rep: Reputation: 483
Quote:
Originally Posted by Richard Cranium View Post
This is complete garbage.

You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level.
It only changes file fragment locations on write accesses to those files. So after using a filesystem for a while, the on-disk result of a well-designed filesystem is the optimum for that specific usage pattern, while the performance of a badly designed filesystem (like NTFS or HFS+) gets worse with every write access.

Technology that is advanced enough that you don't understand it may look like magic to you. Again, Unix filesystems fragment files on purpose. There is no point in defragmenting them, nor is it even possible for file sizes above 8 MB.

Quote:
For that to be even remotely feasible, the OS would have to understand the manner in which you access your data.
And that's what the OS does. But it is not watching you as a single egoistic user: the goal is to maximize overall throughput for all users and processes.

Quote:
For standard hard drives, you just want as many bytes of a file on the same cylinder as possible to reduce head movement while reading data.
No, as a multi-tasking OS you don't want anything for a single file. You want the files that are accessed in parallel to sit together in the same block group. You want to be able to do elevator seeks to reduce latency, so you want evenly filled block groups, which is why bigger files get fragmented and scattered over the block groups on purpose. All of those requirements are accommodated even by a simple Linux filesystem like ext2.

Sorry, but PC magazine folklore from the DOS stone age doesn't apply here.
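For the curious (an illustration, not from the post; the device name is an example): dumpe2fs from e2fsprogs prints the per-block-group layout of an ext2/3/4 filesystem, so the claim about evenly filled block groups can be inspected directly.

Code:
# Show each block group with its free-block and free-inode counts.
dumpe2fs /dev/sda1 | grep -E '^Group|free blocks'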

Last edited by jtsn; 12-26-2013 at 10:43 PM.
 
1 member found this post helpful.
Old 12-26-2013, 11:02 PM   #30
jtsn
Member
 
Registered: Sep 2011
Posts: 925

Rep: Reputation: 483
Quote:
Originally Posted by ttk View Post
How much swap you need depends entirely on the applications you expect to run, and how you want your system to behave.
Actually, on a Unix with a correctly designed memory manager (like Solaris), you want swap in order to make optimal use of the installed memory. Otherwise it stays mostly unused.

Quote:
Some people prefer to not install any swap ever, and have the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing disk and bringing the entire system to its knees. The presumption here, of course, is that memory will only be depleted when something is terribly wrong.
The sole existence of the OOM killer is a hint that something is very wrong with the virtual memory manager of Linux: it hands out memory to applications that it just doesn't have (neither in RAM nor in swap), and crashes them if they start to use it...
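A hedged aside, not part of the post: the behaviour described is Linux memory overcommit, and it is tunable. With strict accounting the kernel refuses allocations it cannot back, instead of granting them and invoking the OOM killer later. The ratio below is an arbitrary example.

Code:
# Refuse allocations beyond swap + overcommit_ratio percent of RAM.
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80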

Quote:
If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want to install swap.
Swap doesn't mean anything slows to a crawl; just have a look at this OpenWRT system:

Code:
$ free
             total         used         free       shared      buffers
Mem:         29212        26696         2516            0         1472
-/+ buffers:              25224         3988
Swap:       262140         6844       255296
It's just running fine; nothing crawls. Having swap frees up some memory, so performance is actually better than without it. (In the -/+ buffers row, used drops to 26696 - 1472 = 25224 KiB once the buffer cache is discounted.) Of course, there is no heavy swap activity, just unused stuff paged out.

Quote:
OpenOffice, this can consume many gigabytes of memory. If that's more memory than you have,
If a user application can use more memory than is installed in the machine, the system administrator forgot to set the resource limits correctly.
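An illustrative sketch (the 4 GiB cap and the soffice launcher are example values, not from the post): a per-process address-space limit can be set with the shell's ulimit builtin before starting the application, so an allocation beyond the cap fails inside that process instead of exhausting the machine.

Code:
# Cap the virtual address space of this shell and its children at 4 GiB.
ulimit -v 4194304   # value is in KiB
# Start the application under that limit.
soffice &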
 
  

