Old 01-22-2013, 08:21 AM   #16
rigelan
Member
 
Registered: Jul 2005
Location: Iowa
Distribution: Slackware
Posts: 180

Rep: Reputation: 19

I reserve about 30 GB for /

Definitely enough for a full install, with room for anything else I want to install.

I give the rest of the disk (~500 GB) to /home.

I don't use swap because I have enough memory (4 GB) to work with. And I mount /tmp on tmpfs, so it resets on every restart of the computer.
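For reference, putting /tmp on tmpfs usually comes down to a single fstab line; the size cap below is only an illustrative value, not something rigelan mentioned:

Code:
# /etc/fstab - keep /tmp in RAM; its contents are discarded on every reboot
# (size=2G is an example cap, adjust or omit to taste)
tmpfs   /tmp   tmpfs   defaults,nosuid,nodev,size=2G   0 0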
 
Old 01-22-2013, 08:45 AM   #17
H_TeXMeX_H
LQ Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1301
I would also say that swap is optional with 4 GB of RAM, unless you want to suspend to disk. I only have 2 GB of RAM and no swap partition, and everything works fine. If the system runs out of RAM, the OOM killer kills the offending program and the system quickly returns to normal. If I had a swap partition it would start swapping like crazy and the system would hang for a long time.
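If you ever want to confirm that the OOM killer actually fired, the kernel log records it; a quick check along these lines works (the command is a generic example, not from the post):

Code:
# show recent OOM-killer activity from the kernel ring buffer
dmesg | grep -i -E 'out of memory|oom-killer|killed process'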
 
1 member found this post helpful.
Old 01-22-2013, 09:05 AM   #18
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by H_TeXMeX_H
If I had a swap partition it would start swapping like crazy and the system would hang for a long time.
But your applications would still be running, and you could close some of them to free up room for the important ones. By contrast, the OOM killer does not know which applications are important to you and kills the one that trips the out-of-memory condition. It would be a sad thing if that were a long-running process whose results become unusable because it ran out of memory and was killed.
Of course, whether that ever happens depends on the applications you are running, but I usually set up 1 GB of swap space even on systems with a lot of RAM. 1 GB does not hurt with today's large disk drives, and you are on the safe side.
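For anyone who wants to add that 1 GB of headroom after installation, a swap file works just as well as a partition; this is only a generic sketch, not part of TobiSGD's setup:

Code:
# create and enable a 1 GB swap file (run as root)
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# make it permanent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab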
 
Old 01-22-2013, 09:49 AM   #19
smith517
LQ Newbie
 
Registered: Jan 2013
Distribution: Slackware
Posts: 7

Original Poster
Rep: Reputation: Disabled
I usually have twice the RAM in swap - I am very capable of creating memory leaks when programming in C. I have a separate /boot because I prefer the boot loader to hand off to Linux ASAP, and corruption of the boot loader partition won't affect my / partition.
I don't have RAID, but I may add it to my (custom-built) computer. I prefer the mirrored kind (RAID 1?).

What file system is better for rapid file creation and deletion?

EDIT: @WiseDraco:
I meant only one partition on the disk - no swap or anything else. That makes recovery a b**ch.

Last edited by smith517; 01-22-2013 at 09:52 AM. Reason: Forgot to respond to WiseDraco
 
Old 01-23-2013, 03:28 PM   #20
perbh
Member
 
Registered: May 2008
Location: Republic of Texas
Posts: 393

Rep: Reputation: 81
Let's look at it this way ...
You do not want to lose any of your personal files (mp3, jpg, docs, etc etc) - everything else is replaceable at little or no cost (timewise).
The problem with personal files (at least if they live in your $HOME) is that they get buried under a proliferation of dot-files; so personally, I keep my personal files away from $HOME, i.e. on a separate partition. Everything else is recoverable. Bear in mind that I always have more than one distro on each computer - typically three! - which leads me to the following partition layout:
Code:
#1: 100 Mb, ext2 (for legacy grub which I use to boot all others by chainloading)
#2: =memory (swap)
#3: 20 gigs, linux #1 (ext3, ext4, jfs, reiserfs - depends on distro)
#4: extended - remainder of disk
#5: 20 gigs, linux #2 (ext3, ext4, jfs, reiserfs - depends on distro)
#6: 20 gigs, linux #3 (ext3, ext4, jfs, reiserfs - depends on distro)
#7: remainder of disk, (ext3, ext4, jfs, reiserfs - must be common to all distros), mounted as '/work'
/work has the following directories: ./scripts, ./bin, ./tmp, ./Documents, ./Downloads, ./Music, etc. I delete the same directories in $HOME and replace them with symlinks to those under /work (see the sketch below).
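A minimal one-time setup sketch of that replace-with-symlinks step, assuming /work is already mounted; the directory list is only illustrative, not perbh's actual script:

Code:
# move selected $HOME directories onto /work and leave symlinks behind
# (directory list is an example; adjust to your own layout)
for d in scripts bin tmp Documents Downloads Music; do
    [ -L "$HOME/$d" ] && continue            # already a symlink, skip
    mkdir -p "/work/$d"
    [ -d "$HOME/$d" ] && cp -a "$HOME/$d/." "/work/$d/" && rm -rf "$HOME/$d"
    ln -s "/work/$d" "$HOME/$d"
done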

This way absolutely all my data is separated from the OS. And since dot-files/directories are seldom portable across distros (unless, of course, you happen to run the same versions of the apps and desktops on all of them - in which case I don't see any need for more than one distro anyway), sharing $HOME itself between distros isn't worth the trouble.

{rant}
Also - each distro is self-contained on one filesystem and it will install its own bootloader on the root filesystem (be it lilo, legacy grub, grub2, whatever). Can you imagine having /var, /home, /boot, /usr/lib etc separate for each distro? Heck, you would soon run out of partitions and it would be a nightmare to support.

So - backups are e-a-s-y to do - you really only need to back up /work. Backing up the root filesystem of a distro takes way too much space and time, unless you have gone to extreme lengths to make it exactly right for you. In any case, backing up a 'running' filesystem is bad (imho) - you will find that /proc alone accounts for about 2 gigs if the system has been running for any length of time, and /proc gets reset at each boot - so that's a lot of wasted time and space already.
{/rant}
 
1 member found this post helpful.
Old 01-23-2013, 03:33 PM   #21
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by perbh
Backing up the root filesystem of a distro takes way too much space and time, unless you have gone to extreme lengths to make it exactly right for you. In any case, backing up a 'running' filesystem is bad (imho) - you will find that /proc alone accounts for about 2 gigs if the system has been running for any length of time, and /proc gets reset at each boot - so that's a lot of wasted time and space already.
If I back up the complete system, I have no problem doing it while the system is running. Both tar and rsync (usually my weapons of choice) have options to exclude directories, so there is absolutely no need to include /proc, /dev, /tmp or /run in your backup.
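For illustration, such an exclude list might look like this with rsync; the destination path and the exact exclusions are placeholders, not TobiSGD's actual command:

Code:
# full-system backup while running, skipping pseudo- and volatile filesystems
# (/mnt/backup is a placeholder destination)
rsync -aAXv \
    --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
    / /mnt/backup/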
 
Old 01-23-2013, 08:06 PM   #22
perbh
Member
 
Registered: May 2008
Location: Republic of Texas
Posts: 393

Rep: Reputation: 81
Quote:
Originally Posted by TobiSGD
If I back up the complete system, I have no problem doing it while the system is running. Both tar and rsync (usually my weapons of choice) have options to exclude directories, so there is absolutely no need to include /proc, /dev, /tmp or /run in your backup.
That _is_ correct, of course - I like quick-and-dirty rsync, but if you want to exclude certain directories you really need either a script or an exclude file ('-X {file}') ...
The way _I_ back up is with a script (run from crontab at 1 am) which checks whether a certain UUID is present (that of my 2 TB external USB drive), mounts the drive (no automount for my part - I want control), does an 'rsync -av [--delete] /work /usb', and then unmounts it - painless ...
I still think it is unnecessary (in most cases, though I stand to be corrected) to back up the OS itself, unless there are some really exotic conf files you absolutely need - or some big downloads (but those could live under /work/Downloads). YMMV though ...
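A rough sketch of that kind of cron-driven script; the UUID, mount point and rsync options are placeholders standing in for perbh's actual values:

Code:
#!/bin/sh
# nightly backup sketch: mount the external drive by UUID, rsync /work, unmount
UUID="xxxx-xxxx"                 # placeholder: UUID of the external USB drive
MNT=/mnt/usb

blkid -U "$UUID" >/dev/null 2>&1 || exit 0   # drive not plugged in - do nothing
mkdir -p "$MNT"
mount UUID="$UUID" "$MNT" || exit 1
rsync -av --delete /work/ "$MNT/work/"
umount "$MNT"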

Last edited by perbh; 01-23-2013 at 08:14 PM.
 
Old 01-23-2013, 10:36 PM   #23
smith517
LQ Newbie
 
Registered: Jan 2013
Distribution: Slackware
Posts: 7

Original Poster
Rep: Reputation: Disabled
I'm very annoyed at how most Windows backup software is all or nothing. Windows has a recovery feature (restore points) that works fine, but I hate having to wait 5 days for the Windows directory to finish backing up. /rant

I just make .tar.gz files of whatever directories I mark for backup - a script driven by a config file. It took 3 days of fighting with life to find time for the necessary research, though. :/
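Something along those lines - a config file listing directories, one per line, and a script that archives each one - might look like this (the file names and paths are invented for the example):

Code:
#!/bin/sh
# sketch: read directories from a config file and archive each as .tar.gz
# (/etc/backup.dirs and /backups are example paths)
CONF=/etc/backup.dirs        # one directory per line
DEST=/backups
DATE=$(date +%Y%m%d)

while read -r dir; do
    [ -d "$dir" ] || continue
    name=$(echo "$dir" | tr '/' '_')
    tar -czf "$DEST/${name}-${DATE}.tar.gz" "$dir"
done < "$CONF"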

EDIT: As the thread is solved, I guess there really is no topic. lol

Last edited by smith517; 01-23-2013 at 10:37 PM. Reason: Random fridge logic.
 
Old 01-23-2013, 11:26 PM   #24
perbh
Member
 
Registered: May 2008
Location: Republic of Texas
Posts: 393

Rep: Reputation: 81
Re Windows - I'm certainly no fan *lol* - but ... in much the same way, I still don't believe in backing up the OS itself. Every computer here that has Windows (actually, I have none, but my wife has two) also has at least one Linux on it. When I want to back up, I just boot into Linux, mount the Windows partition (e.g. on /win) and rsync /win/Users/$USER (replace $USER with the Windows username). Again, it's just good practice to skip the Cache folders ...
'rsync' - at least to me - is the ultimate backup tool. Using 'tar' (as so many do) is like a Windows backup - you get it all! 'rsync' only copies files that are new or have changed. When you have several gigabytes of files to back up, you will soon notice the difference between tar and rsync.
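From the Linux side, that Windows-profile backup could look roughly like this; the device, mount point, username and exclude patterns are placeholders, not perbh's actual setup:

Code:
# mount the Windows partition read-only and rsync the user profile,
# skipping cache folders (device, mount point and username are examples)
mount -o ro /dev/sda2 /win
rsync -av --exclude='*Cache*' --exclude='*/Temp/*' \
    /win/Users/alice/ /backup/alice-windows/
umount /win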

Last edited by perbh; 01-23-2013 at 11:28 PM.
 
Old 01-23-2013, 11:29 PM   #25
smith517
LQ Newbie
 
Registered: Jan 2013
Distribution: Slackware
Posts: 7

Original Poster
Rep: Reputation: Disabled
I didn't know that! I'll read up on it. I'll report back when I have a system on which to back things up. If last time is any indication, don't hold your breath.

I'll stick with my usual auto-script-plus-config approach, though. I like being able to change the directories without touching the script.
 
  

