I reserve about 30GB for /
Definitely enough for a full install, with room for anything else I want to install. The rest of the disk (~500GB) goes to /home. I don't use swap because I have enough memory (4GB) to work with. And I have /tmp mounted on a tmpfs, so it is reset on every restart of the computer.
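For anyone wanting the same setup, a tmpfs /tmp is a single line in /etc/fstab - a minimal sketch (the size= cap is my own choice, tune it to your RAM): Code:
# /tmp on tmpfs - contents are cleared on every boot
tmpfs   /tmp   tmpfs   defaults,noatime,size=1G   0 0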
I would also say that swap is optional with 4 GB of RAM, unless you want to suspend to disk. I only have 2 GB of RAM and no swap partition, and everything works fine. If the system runs out of RAM, the OOM killer kills the offending program and the system returns to normal quickly. If I had a swap partition, it would start swapping like crazy and the system would hang for a long time.
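If you want to verify that behaviour on your own box, both the memory headroom and any OOM kills are easy to check - a quick sketch: Code:
free -h                            # how much RAM (and swap) is in use
dmesg | grep -i "out of memory"    # any OOM-killer activity shows up here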
Quote:
Of course, whether that ever happens depends on the applications you are running, but I usually set up 1GB of swap space even on systems with a lot of RAM. 1GB does not hurt with today's large disk drives, and you are on the safe side.
I usually have twice my RAM in swap - I am very capable of creating memory leaks when programming in C. I keep a separate /boot because I prefer to have the boot loader switch to Linux ASAP, and corruption of the boot partition won't affect my / partition.
I don't have RAID, but I may add it to my (custom-built) computer. I prefer mirrored RAID (RAID 1). What file system is better for rapid file creation and deletion? EDIT: @WiseDraco: I meant only one partition on the disk - no swap or anything else. Makes recovery a b**ch.
Let's look at it this way ...
You do not want to lose any of your personal files (mp3, jpg, docs, etc.) - everything else is replaceable at little or no cost (time-wise). The problem with personal files, at least if they are in your $HOME, is that they get buried under a proliferation of dot-files; so personally, I keep my personal files away from $HOME - i.e. on a separate partition. Everything else is recoverable. Bear in mind that I always have more than one distro on each computer - typically 3! That leads me to the following partition layout: Code:
#1: 100 MB, ext2 (for legacy grub, which I use to boot all the others by chainloading)
This way absolutely all my data is separated from the OS - and seeing that dot-files/directories are seldom portable across distros (unless, of course, you happen to have the same versions of apps and desktops across different distros, in which case I don't see any need for more than one), keeping them out of $HOME avoids that mess entirely.
{rant} Also - each distro is self-contained on one filesystem and installs its own bootloader on its root filesystem (be it lilo, legacy grub, grub2, whatever). Can you imagine having /var, /home, /boot, /usr/lib etc. separate for each distro? Heck, you would soon run out of partitions and it would be a nightmare to support. So backups are e-a-s-y to do - you really only need to back up /work. Backing up the root filesystem of a distro takes way too much space and time, unless you have gone to extreme lengths to get it exactly right for you. In any case, backing up a 'running' filesystem is bad (imho) - you will find that /proc alone reports about 2 gigs if the system has been running for any length of time, and /proc gets reset at each boot - so that's a lot of wasted time and space already. {/rant}
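Incidentally, chainloading each distro's own bootloader from that small legacy-grub partition is just a short menu.lst stanza per distro - a sketch, with the disk/partition numbers made up for illustration: Code:
# grub counts from 0: (hd0,2) = third partition on the first disk
title  Distro on /dev/sda3
root   (hd0,2)
chainloader +1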
The way _I_ back up is with a script (run via crontab at 1am) which checks whether a certain UUID is present (that of my 2TB external USB drive), then mounts the USB drive (no automount for me - I want control), does a 'rsync -av [--delete] /work /usb', and then umounts it - painless ... I still think it is unnecessary (in most cases, but I stand to be corrected) to back up the OS itself, unless there are some really exotic conf-files you absolutely need - or any big downloads (but those could live under /work/Downloads). YMMV though ...
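For the curious, that cron job can be just a few lines - a minimal sketch, where the UUID and mount point are placeholders for your own: Code:
#!/bin/sh
# Nightly backup sketch: only runs if the external drive is plugged in.
UUID="0000-0000"          # hypothetical UUID of the 2TB USB drive
SRC="/work"
MNT="/mnt/usb"

if blkid -U "$UUID" >/dev/null 2>&1; then
    mount UUID="$UUID" "$MNT" || exit 1
    rsync -av --delete "$SRC" "$MNT"
    umount "$MNT"
fi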
I'm very annoyed by how most Windows backup software is all-or-nothing. Windows has a recovery feature (restore points) that works fine. I hate having to wait 5 days for the Windows directory to finish backing up. /rant
I just make .tar.gz files of whatever directories I label to be backed up. A script with a config file. 3 days of fighting with life to find time for the necessary research, though. :/ EDIT: As the thread is solved, I guess there really is no topic. lol
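Something like this is enough for the script-plus-config-file approach - a sketch, with the config path and destination as placeholders: Code:
#!/bin/sh
# Read one directory per line from a config file and tar each one up.
CONF="$HOME/.backup.conf"    # hypothetical: list of directories to back up
DEST="/backups"              # hypothetical: where the archives land

while IFS= read -r dir; do
    [ -d "$dir" ] || continue    # skip blank or bad entries
    tar -czf "$DEST/$(basename "$dir")-$(date +%F).tar.gz" "$dir"
done < "$CONF"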
Re Windows - I'm certainly no fan *lol* - but ... much in the same way, I still don't believe in backing up the OS itself. Every computer I have with Windows on it (actually, I have none, but my wife has two) also has at least one Linux on it. When I want to back up, I just boot into Linux, mount the Windows partition (e.g. on /win) and then back up /win/Users/$USER with 'rsync' (replace $USER with the Windows username). Again, it's just good practice to skip the Cache folders ...
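In practice that looks something like this - a sketch, where the device, mount point, username and destination are all hypothetical: Code:
mount /dev/sda2 /win          # the Windows partition (yours will differ)
rsync -av --exclude='*Cache*/' /win/Users/alice/ /backup/alice/
umount /win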
'rsync' - at least to me - is the ultimate backup tool. Using 'tar' (as so many do) is like a Windows backup - you get it all, every time! 'rsync' only copies files that are new or have changed. When you have several gigabytes of files to back up, you will soon notice the difference between tar and rsync.
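Easy to see for yourself - run the same command twice (paths hypothetical): Code:
rsync -av /work/ /backup/work/   # first run: copies everything
rsync -av /work/ /backup/work/   # second run: near-instant, only new/changed files go over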
I didn't know that! I'll read up on this. I'll respond when I have a system to back stuff up with. If the previous time is any example, don't hold your breath.
I'll go with my usual auto-script + config file, though. I like to change the directory list without touching the script.