Quote:
*) Files above 8 MB can't even be stored in a single fragment.
Quote:
Ramfs won't ever use swap; the memory it occupies is locked in RAM.
Thanks, good to know.
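The difference is easy to demonstrate. A minimal sketch, assuming root access and two hypothetical mount points /mnt/t and /mnt/r: tmpfs enforces its size= limit and its pages can be pushed to swap, while ramfs ignores size= entirely and its pages stay pinned in RAM.

```shell
# Assumes root; /mnt/t and /mnt/r are hypothetical mount points.
mkdir -p /mnt/t /mnt/r

# tmpfs honors the size limit, and its pages may be swapped out
mount -t tmpfs -o size=64M tmpfs /mnt/t

# ramfs silently ignores size=, grows without bound, and never swaps
mount -t ramfs -o size=64M ramfs /mnt/r

df -h /mnt/t    # shows the 64M cap
df -h /mnt/r    # ramfs reports no size limit regardless of the option
```

This is why an out-of-control write into ramfs can eat all of your memory, while the same write into tmpfs stops at the mount's size limit.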
Quote:
|
I haven't used a swap partition in years. If you have enough RAM to get through installation (it used to be 1 GB, but that figure may be stale), you can install with no swap and add swapfiles afterwards.

How much swap you need depends entirely on the applications you expect to run, and on how you want your system to behave. Some people prefer to never install any swap, and let the OOM killer shoot processes in the head when memory is depleted. They consider this preferable to a misbehaving process thrashing the disk and bringing the entire system to its knees. The presumption, of course, is that memory will only be depleted when something is terribly wrong.

This goes back to the question of which applications you expect to run. If you expect memory requirements to exceed memory capacity, and don't care if the application slows to a crawl as long as it completes eventually, then you want swap. For instance, opening a very large Excel spreadsheet in OpenOffice can consume many gigabytes of memory. If that's more memory than you have, it sucks, but the job still needs to get done. If you let firefox run for weeks (like I do), swap alerts you when you've hit the limits of your memory, so you can shut firefox down and restart it gracefully. It totally depends on what you need to do.

My usual mode of operation is to use no swap partition, then create a swapfile equal to memory size in /, activating it in /etc/rc.d/rc.local like so:
Code:
find /swapfile.? -exec swapon {} \;
The swapfile itself is created with:
Code:
dd if=/dev/zero of=/swapfile.3 bs=65536 count=262144
The line in rc.local ensures that all such swapfiles will be activated the next time the system boots, too. I've created swapfiles 10x or more the size of physical memory to accommodate piggish data-mining processes, but usually 2x to 2.5x physical memory is all a desktop needs.
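One step is missing from the recipe above: swapon will not accept a raw file of zeroes, so the file has to be initialized with mkswap first. A sketch of the full sequence, reusing the post's /swapfile.3 and sizes (root is needed since the file lives in /):

```shell
# Create the file; 65536 * 262144 bytes = 16 GiB (adjust to taste)
dd if=/dev/zero of=/swapfile.3 bs=65536 count=262144

chmod 600 /swapfile.3    # swapon warns about looser permissions
mkswap /swapfile.3       # writes the swap signature the kernel looks for
swapon /swapfile.3       # activate now; the rc.local find line re-activates it at boot
```

With the signature in place, the `find /swapfile.? -exec swapon {} \;` line picks the file up automatically on every subsequent boot.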
Quote:
Plus, defragmentation on some file systems requires that the file system be taken offline and unmounted. Often it's best to just run a simple file system integrity check and fix any errors as needed. If you must defragment, defragment only the user files.
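For ext4, a sketch of that workflow, assuming root and a hypothetical /dev/sdb1 data partition with a hypothetical /home/user directory on it (e4defrag ships with e2fsprogs): fsck wants the filesystem unmounted, while e4defrag can work on individual user files online.

```shell
# Offline integrity check: unmount first, then force a full pass
umount /dev/sdb1
fsck -f /dev/sdb1        # -f: check even if the fs is marked clean

# Online, per-file defragmentation of user data only (ext4)
e4defrag -c /home/user   # -c: report fragmentation, change nothing
e4defrag /home/user      # actually defragment, only if the report warrants it
```

Running the `-c` report first is the cheap way to confirm whether defragmenting is worth doing at all.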
There really isn't a swap-size formula anymore. Since RAM is so cheap, you can decide whether you want or need swap. Swap is mandatory if you need hibernation; otherwise, use your measured RAM usage, or your expected load, to decide. In either case a swap partition, a swapfile (possibly on RAID), or even priority-set swap may be appropriate.
Many people still caution against swap on an SSD. I'm not sure I have an opinion; I might avoid it. A separate /home is useful for backups, /boot may be needed on larger disks, and /var is traditional, though I rarely use it in a home setup. The idea that partitioning speeds up access was once pretty common, and there may have been valid tests behind it. It still might prove true, but on an SSD I'm not sure you could ever measure it.
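To put numbers behind that decision, free shows current RAM and swap consumption, and swapon can report the priorities mentioned above. The fstab lines below are hypothetical examples of priority-set swap spread across an SSD and an HDD:

```shell
free -h                  # current RAM and swap usage, human-readable

# Hypothetical /etc/fstab entries: the kernel fills higher pri= first,
# so the SSD swapfile is used before the slower HDD one:
#   /swapfile.ssd  none  swap  sw,pri=10  0 0
#   /swapfile.hdd  none  swap  sw,pri=5   0 0

swapon --show            # active swap areas with sizes and priorities
```

Watching `free` under your real workload for a few days is the "tested ram use" measurement: if swap stays near zero, you can size it small or skip it.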
Quote:
You appear to be implying that the OS will move file segments around on the hard drive under your very nose in order to arrive at some magical fragmentation level. For that to be even remotely feasible, the OS would have to understand how you access your data. For standard hard drives, you just want as many bytes of a file on the same cylinder as possible, to reduce head movement while reading data.
Since we're straying into that territory I hope I'm not too off topic to ask this question here:
Am I right in thinking that fragmentation isn't an issue at all on SSDs? I've always assumed it to be the case from what I understand, but does anyone know for certain?
SSDs do in fact suffer from fragmentation, but defragmenting them shortens their lifespan through the constant reads and writes to the cells. There are other ways to limit fragmentation on SSDs, such as specialized file-system journaling techniques that keep files in place while caching them on a standard HDD for read/write, writing back to the SSD only as needed.
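Whether fragmentation matters for a given file can be checked directly: filefrag (from e2fsprogs) prints a file's extent count, and a single extent means the file is fully contiguous. The path here is just an illustrative example:

```shell
# Count the extents of a file; works the same on SSD and HDD
filefrag /var/log/syslog
# One "extent" is one contiguous run of blocks; more extents = more fragmented
```

On an SSD a high extent count mostly costs per-extent metadata lookups rather than seek time, which is why defragmenting one is rarely worth the write wear.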
Technology that's advanced enough that you don't understand it may look like magic to you. ;) Again, Unix filesystems fragment files on purpose. There is no point in defragmenting them, nor is it even possible for files larger than 8 MB.
Sorry, but PC magazine folklore from the DOS stone age doesn't apply here.