Old 08-11-2006, 02:37 PM   #1
superluser
LQ Newbie
 
Registered: Aug 2006
Location: Rochester, NY USA
Distribution: PLD
Posts: 10

Rep: Reputation: 0
Hard drive for swap/tmp/var?


I'm building a new system, and I'm thinking of having a dedicated drive for the most heavily rewritten parts of the filesystem.

That is, partitions for /tmp, /var, and swamp[*]. What should I look for in a hard drive for this? Are there any hard drive types/manufacturers that are better for this? And how large should these partitions be, especially given that this is going to be on a modern hard drive[+]?

I'm planning on starting with 2GB RAM, and may go up to ~4 (my motherboard will only support what it refers to as 3+ GB), on an AMD64.

[*] That was a typo, but it was too good to pass up.
[+] The smallest HD sold at (e.g.) Best Buy is 40GB. I suspect that it might be hard to find a hard drive smaller than, say, 20GB.
 
Old 08-11-2006, 02:58 PM   #2
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 4.0 Etch
Posts: 1,349

Rep: Reputation: 49
Since you have a lot of RAM, swap hopefully won't be used much. You can get a nice boost in performance by avoiding the hard drive altogether for /tmp, /var/tmp, /var/run, and /var/lock. By putting these on tmpfs ramdisks, the hard drive won't be touched at all unless those temporary files take up a lot of space (in which case the spillover gets offloaded to swap).

You can make ramdisk entries in your fstab like this:
Code:
# RAM-backed mounts for temporary data; nothing here survives a reboot
none  /tmp       tmpfs  defaults  0  0
none  /var/run   tmpfs  defaults  0  0
none  /var/lock  tmpfs  defaults  0  0
none  /var/tmp   tmpfs  defaults  0  0
Note that tmpfs ramdisks start off completely empty: any data on a tmpfs file system is lost when you reboot. They should therefore only be used for directories where keeping data across reboots is unnecessary (or even undesirable).
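If you're worried about a runaway process filling up RAM with temporary files, tmpfs also accepts a size= mount option to cap each mount. A rough sketch (the 512M/256M figures are just examples, tune them to your RAM):
Code:
none  /tmp      tmpfs  defaults,size=512M  0  0
none  /var/tmp  tmpfs  defaults,size=256M  0  0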
 
Old 08-11-2006, 03:43 PM   #3
superluser
LQ Newbie
 
Registered: Aug 2006
Location: Rochester, NY USA
Distribution: PLD
Posts: 10

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by IsaacKuo
You can get a nice boost in performance by avoiding the hard drive altogether for /tmp, /var/tmp, /var/run, and /var/lock.
But what about /var/spool, /var/log, and the rest?
 
Old 08-11-2006, 03:59 PM   #4
rickh
Senior Member
 
Registered: May 2004
Location: Albuquerque, NM USA
Distribution: Debian-Lenny/Sid 32/64 Desktop: Generic AMD64-EVGA 680i Laptop: Generic Intel SIS-AC97
Posts: 4,250

Rep: Reputation: 60
I don't know what you expect to gain by putting those files on a separate drive. A separate partition, yes. Maybe I don't understand some technicality, though.

After considerable experimentation using Debian, I now partition my 80 GB disks like this, all as separate partitions. I don't think that interferes with IsaacKuo's suggestion above; you could still edit fstab to handle those specific directories.

Code:
/       1 GB
swap    1 GB
/tmp    1 GB
/usr    5 GB
/var    3 GB
/home  69 GB
 
Old 08-11-2006, 05:18 PM   #5
superluser
LQ Newbie
 
Registered: Aug 2006
Location: Rochester, NY USA
Distribution: PLD
Posts: 10

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by rickh
I don't know what you expect to gain by putting those files on a separate drive. A separate partition, yes. Maybe I don't understand some technicality, though.
swamp, /var, and /tmp are high-traffic areas. If you get bad sectors, that's where you're going to get them. My points are that (1) these might do well to live on a higher-quality drive, and (2) it would be better to have all the bad sectors on one drive, so that you don't wind up with an essential drive that has large sections that are practically unusable.
 
Old 08-11-2006, 05:34 PM   #6
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 4.0 Etch
Posts: 1,349

Rep: Reputation: 49
AFAIK, bad sectors can appear anywhere at any time, often due to overheating. To maximize hard drive reliability, the most important thing is keeping its temperature low. Multiple drives can actually harm reliability by restricting airflow and/or generating more heat.

If you're concerned about hard drive reliability, the best thing is to make sure the hard drive is getting good cool airflow.
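If you want to keep an eye on this yourself, the smartmontools package can read the drive's own health counters, including its temperature and the reallocated (remapped) sector count. A quick check, assuming SMART is enabled and the drive is /dev/sda (attribute names vary a bit between vendors):
Code:
# show the drive's SMART attribute table and pick out the interesting rows
smartctl -A /dev/sda | grep -Ei 'temperature|reallocated'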
 
Old 08-11-2006, 06:36 PM   #7
benjithegreat98
Senior Member
 
Registered: Dec 2003
Location: Shelbyville, TN, USA
Distribution: Fedora Core, CentOS
Posts: 1,019

Rep: Reputation: 45
If you are really trying to get every ounce of performance out of your hard drive, you should first look at things like seek time and RPM. That will boost performance more than anything.

Something else you can do is consider the geometry of the drive itself. Sector numbering starts at the outer edge of the platters (the opposite of a CD-ROM, which starts on the inside), and the outer tracks pass under the heads faster, so sequential transfer rates are highest at the start of the disk and drop off toward the end. This means that whichever partitions you think get written to the most (which is debatable) should go toward the beginning of the partition table.
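You can actually see this zoning effect by timing raw reads near the start and near the end of the disk. A rough, read-only test, assuming the drive is /dev/sda and roughly 80 GB (adjust skip= for your disk size):
Code:
# 256 MB read from the start of the disk (outer tracks)
dd if=/dev/sda of=/dev/null bs=1M count=256
# 256 MB read from near the end (inner tracks); skip is counted in bs-sized blocks
dd if=/dev/sda of=/dev/null bs=1M count=256 skip=76000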

You say swap is written to more than the others? If you're really going to have 2 GB of RAM, do you actually expect to use that much? I've got 768 MB and I rarely, if ever, use the swap partition. /var? Depends what you're doing; if you're serving a lot of web pages out of /var, OK. Most of your programs and libraries probably load out of /usr. I would count /home, /usr, and maybe /tmp as the most heavily used (not in that order).
 
Old 08-11-2006, 10:32 PM   #8
superluser
LQ Newbie
 
Registered: Aug 2006
Location: Rochester, NY USA
Distribution: PLD
Posts: 10

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by benjithegreat98
If you are really trying to get every ounce of performance out of your hard drive, you should first look at things like seek time and RPM. That will boost performance more than anything.
Well, speed isn't the issue. Reliability is.

For example, if I wanted a faster drive, I could get one of the new ones that has a flash RAM cache. But that wouldn't be good, because the flash has something like 100,000 erase cycles per block. This is more than CD-RWs, for example, but still less than a decent hard drive.

Let me also be clear about one other point. The issue is not how many times it is read, but how many times it is erased (or altered).

Quote:
Originally Posted by benjithegreat98
You say swap is written to more than the others? If you're really going to have 2 GB of RAM, do you actually expect to use that much? I've got 768 MB and I rarely, if ever, use the swap partition. /var? Depends what you're doing; if you're serving a lot of web pages out of /var, OK. Most of your programs and libraries probably load out of /usr. I would count /home, /usr, and maybe /tmp as the most heavily used (not in that order).
As I say, failure doesn't come from read cycles, but from write cycles. You don't typically alter the web pages that you are hosting on a minute-to-minute basis, do you? Looking at some of my logs, they're updated every minute or so.
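(To get a feel for how much write traffic that actually adds up to, something like vmstat will show it over time; the "bo" column is blocks written out to disk:)
Code:
# print memory/IO statistics every 5 seconds; watch the bo (blocks out) column
vmstat 5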

I'm looking for some advice on how to find a drive that has enough resilience for highly-rewritten partitions.
 
Old 08-12-2006, 01:17 AM   #9
benjithegreat98
Senior Member
 
Registered: Dec 2003
Location: Shelbyville, TN, USA
Distribution: Fedora Core, CentOS
Posts: 1,019

Rep: Reputation: 45
Quote:
As I say, failure doesn't come from read cycles, but from write cycles.
Are you so sure about that? I thought it was head crashes, which can happen independently of read or write cycles, since the heads never physically touch the platters during a read or write. If you have a source to back that up, I'll be willing to agree.

And if that were true, wouldn't something like your file allocation table get corrupted all the time, from constantly being rewritten in such a small space?
 
Old 08-12-2006, 08:55 AM   #10
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 4.0 Etch
Posts: 1,349

Rep: Reputation: 49
Flash memory can fail due to the write cycles, but not magnetic hard drives.

The original poster may not be aware that by default, POSIX file systems (including ext3) write to the disc for every read access, and journaled file systems like ext3 constantly write to the disc even with the POSIX-compliant "atime" feature turned off. By default Linux is CONSTANTLY writing to all mounted ext3 partitions!

This is why you have to be careful how you mount flash memory. For removable thumb drives and digicam memory cards this isn't a problem, because they're usually formatted as FAT--a file system that is neither journaled nor POSIX-compliant. With FAT, the only time data gets written to the partition is when a file is actually being created, deleted, or modified.
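If you want to cut down on those incidental writes on a regular ext3 partition, the usual trick is the noatime mount option. A quick way to try it on an already-mounted filesystem (here the root filesystem, purely as an example):
Code:
# stop updating access times on / for this session
mount -o remount,noatime /
Add noatime to the options field of the matching /etc/fstab line to make it permanent.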
 
Old 08-12-2006, 04:08 PM   #11
superluser
LQ Newbie
 
Registered: Aug 2006
Location: Rochester, NY USA
Distribution: PLD
Posts: 10

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by IsaacKuo
Flash memory can fail due to the write cycles, but not magnetic hard drives.

The original poster may not be aware that by default, POSIX file systems (including ext3) write to the disc for every read access, and journaled file systems like ext3 constantly write to the disc even with the POSIX-compliant "atime" feature turned off. By default Linux is CONSTANTLY writing to all mounted ext3 partitions!
Huh. I could have sworn that was one of the main factors causing wear and tear on a disk (a bad block here or there, that kind of thing). I do know that you're constantly (well, occasionally, but at a fairly constant rate) losing blocks to wear and tear, and that hard drive manufacturers build in spare sectors to compensate.

Ah, well. So how large should these partitions be? I'm uneasy about having a system with no swamp, and what should I allocate for /tmp and /var?
 
Old 08-12-2006, 06:21 PM   #12
J.W.
LQ Veteran
 
Registered: Mar 2003
Location: Milwaukee, WI
Distribution: Mint
Posts: 6,642

Rep: Reputation: 69
Flash drives do have a ceiling on how many times they can safely be rewritten, but as far as I can tell regular hard drives do not. Hard drives were *designed* to allow their contents to be changed at will. All hard drives are subject to failure over time, but only because that's the nature of hardware; nothing lasts forever. Regarding the comment that "ext3 systems are constantly writing to the disk", I'm not sure that's accurate. Ext3 filesystems do record the timestamp of the most recent access (which has to be stored somewhere), but those updates are buffered in memory and only flushed to disk periodically. So yes, in a certain sense information is always being saved to the disk, but no, the disk is not literally performing a write operation 24/7.
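For reference, ext3 batches those updates and commits its journal on a timer; the interval is the commit= mount option, which defaults to 5 seconds. A hypothetical fstab line stretching it to a minute (and turning off atime while we're at it), assuming root is on /dev/hda1:
Code:
/dev/hda1  /  ext3  defaults,noatime,commit=60  0  1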

Anyway, with regards to your "swamp" (heh, good one) partition, with 2 GB of RAM it's unlikely that you'd ever really have much need for swamp. Give it 256MB or 512MB; beyond that you're just wasting disk space. Swamp is only needed when the load on the system exceeds your physical RAM and the system is forced to write memory pages out to disk. The more RAM you have, the less likely that scenario becomes. Personally, my guideline is that if you have at least 256MB of RAM, a 256MB swamp is all you need, because most typical desktop systems don't exhaust even 256MB of RAM. The ancient "swamp = 2x RAM" recommendation dates from the late '90s, when having 8MB or 16MB of RAM was considered pretty respectable and it was relatively easy for your system load to consume 100% of RAM, which made swamp a critical consideration. Most home PCs today are orders of magnitude more powerful than what their owners use them for (e.g., nobody really needs 1GB of RAM and a 3GHz CPU just to check email, visit MySpace, and listen to an mp3 collection), and as a result the importance of swamp has diminished.
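And if you ever do come up short, you can add a swap file later without repartitioning. A rough sketch; /swapfile and 512MB are arbitrary choices:
Code:
dd if=/dev/zero of=/swapfile bs=1M count=512   # create a 512MB file
chmod 600 /swapfile                            # keep other users out of it
mkswap /swapfile                               # set it up as swap space
swapon /swapfile                               # start using it now
# add "/swapfile  none  swap  sw  0  0" to /etc/fstab to enable it at boot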
 
  

