12-25-2013, 09:21 PM | #16
LQ Addict
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
Quote:
Originally Posted by 273
Thanks for the clarification, since I don't have swap I feel better using tmpfs.
Ah, sorry, does that mean that tmpfs will use swap if RAM is full? Doesn't that mean that if your RAM is generally full it is the same as using your HDD? Also, will ramfs not use swap ever? If so, that sounds like a better idea to me when using an SSD, as you can flood your memory and then use swap for processes whilst you work out which process to kill to regain your RAM.
12-25-2013, 09:34 PM | #17
Member
Registered: Sep 2011
Posts: 925
Quote:
Originally Posted by mlslk31
They're still there, just not as obvious as on Windows. After a while, though, a speed gain can still be had from doing a backup/zero/format/restore cycle.
Of course, you can get some temporary speed gain for single-tasking, but on Unix we want multi-tasking/multi-user throughput, and its filesystems are optimized for that. So after a while every filesystem stabilizes at a specific fragmentation configuration for optimal throughput and stays there.
Quote:
That "while" can be a year or more, though, if I manage major deletes responsibly.
So you actually delete and rebuild your filesystems after they have reached their optimal configuration?
Quote:
Some filesystems (XFS, ext4, and btrfs) have defragmentation tools, but some filesystems (JFS, NILFS2, F2FS, and some more) don't have defragmentation tools.
You don't want defragmentation tools, because Unix-like filesystems (like Ext2/4, XFS, UFS) actually fragment files on purpose!*) They do it to reach the goals described above: maximum multi-tasking/multi-user throughput.
*) Files above 8 MB can't even be stored in a single fragment.
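For reference only, the tools named in the quote are invoked roughly like this (a sketch; the devices and paths are placeholders, and all of these need root):
Code:
# e4defrag ships with e2fsprogs, xfs_fsr with xfsprogs
e4defrag /home
xfs_fsr /dev/sdb1
btrfs filesystem defragment -r /mnt/data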
12-25-2013, 09:39 PM | #18
Member
Registered: Sep 2011
Posts: 925
Quote:
Originally Posted by 273
Ah, sorry, does that mean that tmpfs will use swap if RAM is full? Doesn't that mean that if your RAM is generally full it is the same as using your HDD? Also, will ramfs not use swap ever?
Tmpfs will use RAM occupied by the buffer cache without writing its content out to disk, because it's temporary anyway. But it gives the kernel the option to use swap in a low-memory situation. Another good reason for tmpfs is its built-in size limit (which defaults to half the RAM size).
Ramfs will never use swap; it permanently occupies the memory it uses.
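For illustration, the difference looks roughly like this (a minimal sketch; the mount points are arbitrary examples):
Code:
mkdir -p /mnt/scratch /mnt/ram
# tmpfs enforces a size cap (half of RAM if size= is omitted) and may swap
mount -t tmpfs -o size=1G tmpfs /mnt/scratch
# ramfs enforces no cap, and its pages are never swapped out
mount -t ramfs ramfs /mnt/ram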
1 member found this post helpful.
12-25-2013, 09:45 PM | #19
LQ Addict
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
Thanks, good to know.
12-25-2013, 10:01 PM | #20
Member
Registered: Mar 2013
Location: Florida, USA
Distribution: Slackware, FreeBSD
Posts: 210
Quote:
Originally Posted by jtsn
So you actually delete and rebuild your filesystems after they have reached their optimal configuration?
Sometimes, sometimes not. If I had one of my adventures where I just wanted to compile one non-Slackware thing, and it asked me to recompile half the system and add 35 prerequisites, I'll probably back up and restore once everything is deemed to be *perfect* and stays perfect for a month. For a system that's been running perfectly for a year, though, I'll probably just run defrag and back it up, maybe even restore to another partition for use as an alternate/rescue system.
12-26-2013, 02:38 AM | #21
Senior Member
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,039
I haven't used a swap partition in years. If you have enough RAM to facilitate installation (used to be 1GB, but that might be stale), you can get through it with no swap, and install swap files after installation.
How much swap you need depends entirely on the applications you expect to run, and how you want your system to behave.
Some people prefer to not install any swap ever, and have the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing disk and bringing the entire system to its knees. The presumption here, of course, is that memory will only be depleted when something is terribly wrong. This goes back to the question of which applications you expect to run.
If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want to install swap. For instance, if you need to open very large Excel spreadsheets in OpenOffice, this can consume many gigabytes of memory. If that's more memory than you have, then that sucks, but the job still needs to get done. If you let Firefox run for weeks (like I do), you can depend on swap to alert you when you've hit the limits of your memory, and tell Firefox to shut down and restart gracefully. It totally depends on what you need to do.
My usual mode of operation is to use no swap partition, then create a swapfile equal to memory size in /, activating it in /etc/rc.d/rc.local like so:
Code:
# activate every swapfile named /swapfile.<single character> at boot
find /swapfile.? -exec swapon {} \;
...and then create more swapfiles as needed, before running OpenOffice or whatever, like so:
Code:
# write 16 GiB of zeros (65536-byte blocks x 262144), then restrict, format, and enable
dd if=/dev/zero of=/swapfile.3 bs=65536 count=262144
chmod 600 /swapfile.3
mkswap /swapfile.3
swapon /swapfile.3
(to create, format, and activate a 16GB swapfile).
The line in rc.local ensures that all such swapfiles will be activated the next time the system boots, too.
I've created swapfiles 10x or more the size of physical memory to accommodate piggish datamining processes, but usually 2x to 2.5x physical memory is all a desktop needs.
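You can confirm which swap areas are active, and how much of each is in use, at any time:
Code:
# lists each active swap area with its size, usage, and priority
cat /proc/swaps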
1 member found this post helpful.
12-26-2013, 03:39 AM | #22
LQ Guru
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,564
Quote:
Originally Posted by jtsn
Of course, you can get some temporary speed gain for single-tasking, but on Unix we want multi-tasking/multi-user throughput, and its filesystems are optimized for that. So after a while every filesystem stabilizes at a specific fragmentation configuration for optimal throughput and stays there.
So you actually delete and rebuild your filesystems after they have reached their optimal configuration?
You don't want defragmentation tools, because Unix-like filesystems (like Ext2/4, XFS, UFS) actually fragment files on purpose!*) They do it to reach the goals described above: maximum multi-tasking/multi-user throughput.
*) Files above 8 MB can't even be stored in a single fragment.
Defragmentation is only useful when files start to suffer from read/write errors, and you have to know what to defragment. User-space files like those under /home and /root (the admin user's home) are what should be defragmented; core system files in / and the libraries and applications under /usr shouldn't require regular defragmentation, if any at all, unless you're preparing for a major system update and want files in one contiguous space.
Plus, defragmentation on some file systems requires that the file system be taken offline and unmounted.
Often it's best just to run a simple file system integrity check, look for errors, and fix them as needed. If you need to defragment, only defragment the user files.
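On an ext filesystem such a check might look like this (a sketch; the device name is a placeholder, and the filesystem must be unmounted first, e.g. from a rescue system):
Code:
# force a full integrity check of an unmounted ext2/3/4 filesystem
e2fsck -f /dev/sda1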
12-26-2013, 10:30 AM | #23
Moderator
Registered: Mar 2008
Posts: 22,228
There really isn't a swap-size formula anymore. Since RAM is so cheap, one can decide whether they want or need swap. Swap is mandatory if you need hibernation. Otherwise, decide based on your measured RAM use or your expected load. In either case a swap partition, a swap file (possibly on RAID), or even priority-set swap may be needed (a sketch of the latter follows at the end of this post).
Many people still caution against swap on an SSD. Not sure I have an opinion. I might avoid swap.
A separate /home is useful for backup.
/boot may be needed on larger disks.
/var is traditional, but I rarely use it in a home setup.
At one time the idea that separate partitions speed up access was pretty common, and maybe there were some valid tests. It still might prove true, but on an SSD I'm not sure one could ever measure it.
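As a sketch of priority-set swap (the device and file names here are placeholders): areas given equal priority are used round-robin, which stripes pages across them.
Code:
# equal priorities stripe swap pages across both areas
swapon -p 5 /dev/sda2
swapon -p 5 /swapfile.1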
12-26-2013, 01:28 PM | #24
Senior Member
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,860
Quote:
Originally Posted by jtsn
Of course, you can get some temporary speed gain for single-tasking, but on Unix we want multi-tasking/multi-user throughput, and its filesystems are optimized for that. So after a while every filesystem stabilizes at a specific fragmentation configuration for optimal throughput and stays there.
This is complete garbage.
You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level. For that to be even remotely feasible, the OS would have to understand the manner in which you access your data. For standard hard drives, you just want as many bytes of a file on the same cylinder as possible to reduce head movement while reading data.
12-26-2013, 01:31 PM | #25
LQ Addict
Registered: Dec 2011
Location: UK
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
Since we're straying into that territory, I hope I'm not too off-topic to ask this question here:
Am I right in thinking that fragmentation isn't an issue at all on SSDs? I've always assumed that to be the case from what I understand, but does anyone know for certain?
12-26-2013, 05:29 PM | #26
LQ Guru
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,564
SSDs actually do suffer from fragmentation, but defragmenting them reduces their lifespan through the constant reads and writes to the cells. There are other methods of preventing fragmentation on SSDs, such as specialized file-system journaling techniques that keep files in place while caching them to a standard HDD for reads and writes, writing back to the SSD only as needed.
1 member found this post helpful.
12-26-2013, 09:13 PM | #27
Member
Registered: May 2010
Posts: 621
Quote:
Originally Posted by 74razor
Just curious: I plan on making a swap partition, with /, /home, and /var on separate partitions. I've gone through the wiki, but I don't see much as far as recommended sizes go. I have a 160 GB SSD.
You don't have a lot of space, and chopping it up always wastes some. You will waste 15 GiB or so on just the root (because if you don't, you will risk hitting the cap). A few more GiB on /var. You can waste any amount of space on /tmp, if you elect to have one. So I would recommend two partitions: swap (1.1× RAM size if you want to hibernate, anywhere from 0 to 0.5× if you don't), and the rest for the root. Just monitor the available disk space (which you should do anyway) and you will be fine.
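Monitoring can be as simple as, for example:
Code:
# human-readable used/available space on the root filesystem
df -h /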
1 member found this post helpful.
12-26-2013, 10:14 PM | #28
Member
Registered: Sep 2011
Posts: 925
Quote:
Originally Posted by ReaperX7
Defragmentation is only useful when files start to suffer from read/write errors, and you have to know what to defragment. User-space files like those under /home and /root (the admin user's home) are what should be defragmented; core system files in / and the libraries and applications under /usr shouldn't require regular defragmentation, if any at all, unless you're preparing for a major system update and want files in one contiguous space.
Nothing needs to be defragmented at all. That's the whole point of having a filesystem more advanced than NTFS or HFS+.
1 member found this post helpful.
12-26-2013, 10:32 PM | #29
Member
Registered: Sep 2011
Posts: 925
Quote:
Originally Posted by Richard Cranium
This is complete garbage.
You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level.
It only changes file-fragment locations on write accesses to those files. So after using a filesystem for a while, the on-disk result of a well-designed filesystem is the optimum for that specific use pattern, while the performance of a badly designed filesystem (like NTFS or HFS+) gets worse with every write access.
Technology that's advanced enough that you don't understand it may look like magic to you. Again, Unix filesystems fragment files on purpose. There is no point in defragmenting them, nor is it even possible for files above 8 MB.
Quote:
For that to be even remotely feasible, the OS would have to understand the manner in which you access your data.
And that's exactly what the OS does. But it is not watching you as one egoistic user: the goal is to maximize overall throughput for all users and processes.
Quote:
For standard hard drives, you just want as many bytes of a file on the same cylinder as possible to reduce head movement while reading data.
No, as a multi-tasking OS you don't care about any single file. You want the files that are accessed in parallel to sit together in the same block group. You want to be able to do elevator seeks to reduce latency, so you want evenly filled block groups, which means bigger files get fragmented and scattered over the block groups on purpose. All of these requirements are accommodated even by a simple Linux filesystem like ext2.
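For the curious, you can see how many fragments a given file actually occupies (filefrag ships with e2fsprogs and may need root; the path is just an example):
Code:
filefrag /var/log/messages
# e.g.: /var/log/messages: 3 extents found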
Sorry, but PC magazine folklore from the DOS stone age doesn't apply here.
Last edited by jtsn; 12-26-2013 at 10:43 PM.
1 member found this post helpful.
12-26-2013, 11:02 PM | #30
Member
Registered: Sep 2011
Posts: 925
Quote:
Originally Posted by ttk
How much swap you need depends entirely on the applications you expect to run, and how you want your system to behave.
Actually, on a Unix with a correctly designed memory manager (like Solaris) you want swap in order to make optimal use of the installed memory. Otherwise part of it stays mostly unused.
Quote:
Some people prefer to not install any swap ever, and have the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing disk and bringing the entire system to its knees. The presumption here, of course, is that memory will only be depleted when something is terribly wrong.
The sole existence of the OOM killer is a hint that something is very wrong with the virtual memory manager of Linux: it hands out memory to applications that it just doesn't have (neither in RAM nor in swap) and crashes them if they start to use it...
Quote:
If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want to install swap.
Swap doesn't mean that anything slows to a crawl; just have a look at this OpenWRT system:
Code:
$ free
              total     used     free   shared  buffers
Mem:          29212    26696     2516        0     1472
-/+ buffers:  25224     3988
Swap:        262140     6844   255296
It's running just fine; nothing crawls. Having swap frees up some memory, so performance is actually better than without it. Of course, there is no heavy swap activity, just unused stuff paged out.
Quote:
…OpenOffice, this can consume many gigabytes of memory. If that's more memory than you have…
If a user application can use more memory than is installed in the machine, the system administrator forgot to set the resource limits correctly.
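As a hypothetical example of such a limit (the value is arbitrary):
Code:
# cap each process's address space at 4 GiB for this shell session (value in KiB)
ulimit -v 4194304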