Sequence of partitions / mount points
Hi, I'm racking my brain trying to figure out the best partition layout for the Linux installation I'm about to do on my laptop. Having read numerous articles on partitioning in Linux I've gathered some ideas, but there was no clear explanation of the sequence in which the mount points should be arranged on the disk. What I have in mind is to use the single disk's space as efficiently as possible with regard to head travel. The machine is a laptop with a 160GB HDD and will be used as a normal desktop with some simple sound processing. The distro is Linux Mint 10. I'm planning the following partitions, all of which will come after a Win7 installation:
/boot -> some write that it's not necessary when dual-booting, others that it's good to have for security
swap -> with 4GB of RAM I don't suppose I'll use it
/
/usr -> programs live there, so considerable usage?
/tmp
/var
/opt
/home -> the largest one, at the end

My idea is to have the most heavily utilised partitions close to each other so the head doesn't travel large distances. Placement also makes a difference, as performance gets worse the closer you get to the inner rim of the disk. I'm also not sure about the sizes. I've read posts with recommendations, but judging by installations on a different laptop and in a virtual machine, e.g. 5GB for /opt is a bit too much as there's almost nothing in there. Certainly /usr fills up, and /var too from what I've observed. / also has scarce data in it, so I'm wondering whether giving them e.g. 5 gigs each won't be a waste of space resulting in greater head travel. :confused: Any ideas most welcome! :hattip: |
Wow! I think that you're going a wee bit too deep. The people who put distributions together have probably thought about all this, so assuming you've got your Windows 7 partition and another free one on the disk, I'd do the install and, when it comes to partitioning, say "use all the free space". If you must customise, I'd only have three partitions: /, /home and swap, where / would also contain /usr, /etc, /bin, etc.
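For the record, the simple three-partition scheme suggested above might look like this in /etc/fstab. This is a sketch only; the device names, filesystem type and mount options are my assumptions, not taken from the thread:

```
# /etc/fstab sketch for a /, /home, swap layout (device names are hypothetical)
/dev/sda5  /      ext4  defaults,noatime  0 1   # root: also holds /usr, /var, /opt, /tmp
/dev/sda6  /home  ext4  defaults,noatime  0 2   # user data, the largest partition
/dev/sda7  none   swap  sw                0 0   # swap
```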
Suck it and see; if you don't like it, you can always re-install. :D Play Bonny! :hattip: |
Personally, I would place stuff handled by the native package manager in the top-level '/usr', and stuff that is not handled by the native package manager (i.e. hand-compiled or handled by a secondary manager, e.g. paco) in '/usr/local', as long as it conforms to the rest of the standard layout (e.g. having bin/, lib/, share/, etc. subdirectories) and (my own personal criterion) has a good method of uninstalling. This is how the FHS seems to imply it should be used:
http://www.pathname.com/fhs/pub/fhs-...LOCALHIERARCHY "The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr." Obviously 'system software' here means stuff installed by the native package manager. For me, Opera (where no native package is available) falls into this camp (and yes, it is shareable). The other advantage of installing into /usr/local as opposed to /opt is that on most distros Opera will be fully integrated with no further effort by the user. By that I mean you will not have to adjust $PATH, $MANPATH, XDG settings, etc. for full system integration. You can just run Opera, use 'man opera' or get desktop integration (Desktop Environment shortcuts, MIME setup, etc.) with no extra work. I would only use '/opt' where a package does not conform (often the case with apps ported from other OSes) and/or is large and unwieldy to remove from '/usr/local'. |
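To illustrate the integration point: anything dropped into the standard /usr/local layout is picked up without touching $PATH, because most distros already include /usr/local/bin in it. A minimal sketch, using a scratch directory instead of the real /usr/local so it runs unprivileged (the program name hello-local is made up):

```shell
# Sketch of the "install by hand into /usr/local" flow the FHS implies.
# A scratch prefix stands in for /usr/local so no root is needed.
prefix=./usr-local-demo
mkdir -p "$prefix/bin" "$prefix/lib" "$prefix/share/man/man1"

# Stand-in for "make install": drop an executable into $prefix/bin
printf '#!/bin/sh\necho hello from local\n' > "$prefix/bin/hello-local"
chmod +x "$prefix/bin/hello-local"

# On a real system /usr/local/bin is already in $PATH, so a freshly
# installed program is runnable immediately; here we add the scratch
# bin dir explicitly to simulate that.
PATH="$prefix/bin:$PATH" hello-local   # prints "hello from local"
```

On a real system the mkdir and install steps would target /usr/local (and need root), and the final command would work with no PATH changes at all.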
Code:
$ equery f adobe-flash

Again, as said above, I am merely describing how things are; I didn't invent the standard, and I didn't invent Gentoo or any other distro either. This is deviating from the original purpose of the thread anyway. I only wanted to give my opinion that using a separate partition for /opt is not smart. Also, going back to the FHS, the points about shareability and filesystem isolation (such as mounting /usr "ro") are not very relevant for a desktop user. |
/tmp is more efficient as a tmpfs than as an ordinary partition. Doing /tmp that way may increase the need for swap, and having some swap is a good idea anyway, so plan for both when you decide how much swap to allocate.
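Mounting /tmp as a tmpfs comes down to one fstab line; a sketch, where the size= cap is my assumption and should be tuned to your workload:

```
# /etc/fstab entry (sketch): mount /tmp as tmpfs, capped at 2 GB.
# Pages that don't fit in RAM spill to swap, which is why a tmpfs
# /tmp increases the need for swap.
tmpfs  /tmp  tmpfs  defaults,noatime,size=2G  0  0
```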
Your available disk space is low enough that you ought to give some thought to correctly sizing swap (including /tmp as a tmpfs). On a desktop system with a big disk, it is easier to just give swap more than it ever might need and not worry that doing so wastes a couple GB of disk. In your case, you probably want a more accurate estimate of /tmp needs in order to size swap. But for /boot, /usr, /home, /var, etc. even with a lot of effort toward good estimates of the required size you would still be wasting more space than you should for no actual benefit. |
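Putting rough numbers on that: one common rule of thumb (my sketch, not from the thread) is to cover the tmpfs /tmp ceiling plus a margin, and at least RAM size if hibernation might ever be wanted. The 2G /tmp figure and 1G margin below are assumptions:

```shell
# Rough swap-sizing heuristic (a sketch, with assumed inputs)
ram_gb=4        # this laptop's RAM
tmp_gb=2        # tmpfs size chosen for /tmp
margin_gb=1     # headroom for ordinary anonymous-memory swapping

swap_gb=$((tmp_gb + margin_gb))
# hibernation needs swap large enough to hold a full RAM image:
if [ "$swap_gb" -lt "$ram_gb" ]; then swap_gb=$ram_gb; fi

echo "suggested swap: ${swap_gb}G"   # prints "suggested swap: 4G"
```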
Thanks for your insights. It seems more reasonable to have a simpler filesystem layout for normal desktop use. On the other hand, the idea of the filesystem spread across 120+ GB is somewhat questionable to me...
The concept of shortening seek time is worth considering, especially in dual-HDD configurations. Having two spindles reading/writing the requested data at the same time would definitely be advantageous, and I'm wondering how that would be achieved in Linux.

NTFS by default tries to squeeze data as close to the start of a volume as possible, thus creating fragmentation problems. Correct me if I'm wrong, but defragmented data should still be accessible with shorter seek times on an NTFS volume. From what I've read, a Linux filesystem will try to spread data evenly throughout the volume, leaving a considerable amount of space for files to grow or be easily moved, which prevents fragmentation. In that case, if I have a defragmented 30GB NTFS partition filled with 5GB of data, the head will fetch whatever it needs from the beginning of the partition, finding the placement of files in the MFT (also at the beginning of the volume). If this were Linux, wouldn't that data be spread throughout the whole partition? If so, the head would need much more seek time to fetch files which are far from each other.

source: http://geekblog.oneandoneis2.org/ind..._defragmenting "A linux file system scatters files all over the disc so there's plenty of free space if the file's size changes. It can also re-arrange files on-the-fly, since it has plenty of empty space to shuffle around. Defragging the first type of filesystem is a more intensive process and not really practical to run during normal use. The cleverness of this approach is that the disk's stylus can sit in the middle, and most files, on average, will be fairly nearby."

The above article states that it's beneficial that Linux spreads the files more or less "in the middle", so the head can retrieve them starting from the middle on average. But what if we have 100GB of space for this retrieval? Won't the files be too far apart, so to speak, for efficient seeking?
I imagine the head running across the whole 100GB chunk of the platter in fairly random fashion. I know that information about file placement is kept in something called a "superblock", but it's still a mystery to me how scattering files across a large space would be beneficial speed-wise.

In Windows it's definitely a good idea to separate the system partition from the programs and put them on separate disks, preferably at the front. The big gains are:

a) a smaller load on the system partition
b) two disks working simultaneously when fetching files, which reduces latency
c) utilising the outer zone, which has the best performance
d) easier maintenance of a divided filesystem

Would there be a similar benefit from separating / and /usr (and maybe some others) and putting them on two different disks? That way the filesystem, data and programs could be read independently to a certain extent, improving access times. I'm not taking into account special needs resulting from e.g. a dedicated mail server or something similar; HDD performance in more or less "standard" operation only.

Regarding the swap partition, I incline towards what i92guboy said. When I tested the system by opening all the apps I have (no VMware though), memory use wasn't even close to the amount of RAM, and swap was at 0% usage. Maybe a swap file whose size I can modify is more beneficial, unless someone uses VMware or does a lot of video/sound processing.

Long post :) If anyone has some experience with spreading mount points over multiple disks, please write. :) |
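On the resizable-swap point: a swap file can indeed be grown or shrunk later, unlike a partition. A sketch of creating one, written against a scratch path and a tiny 16 MB size so it can run unprivileged; a real one would live at /swapfile and be gigabytes:

```shell
# Sketch: create a swap *file*, whose size can be changed later.
swapfile=./swapfile.demo            # on a real system: /swapfile
dd if=/dev/zero of="$swapfile" bs=1M count=16 status=none   # 16 MB demo; e.g. count=4096 for 4 GB
chmod 600 "$swapfile"               # swap files must not be world-readable
ls -l "$swapfile"

# On a real system, as root:
#   mkswap /swapfile && swapon /swapfile
# and make it permanent with an /etc/fstab line:
#   /swapfile  none  swap  sw  0  0
```

Resizing later is just swapoff, recreate the file at the new size, mkswap, swapon, which is exactly the flexibility a fixed swap partition lacks.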
I'm not saying that more exotic layouts have no use; I just think that the hassle and limitations such a scheme puts on a desktop are not worth the trouble. But it's just my opinion, of course. It might even change tomorrow ;) You will probably have to test and decide for yourself. :) |