Quote:
Originally Posted by pvs
1) if you have no exact plan (no special needs), then the best choice is to make a single partition for everything
|
I disagree with that. Nowadays HDDs hold many gigabytes, and one big root filesystem on such a beast is a recipe for disaster. Imagine having to fsck that root: you will be waiting for ages, I promise!
A much better idea is a moderately sized root (say 10 to 20 GB if all the stuff lives there, I mean /usr, /var, etc., or even 1 GB if that stuff is on separate partitions). And I suggest keeping all other (user) data on /home, /export/home, or whatever name you like. Trust me, with a moderate root you will get your system up quickly regardless of other pending issues you may have with the data filesystems.
And yet, do not make a separate /boot unless you have a strong motivation to do so, e.g. root on LVM and the like.
Remember, a separate /boot typically carries no salvage utilities, unlike root (/sbin/e2fsck and so on). That's a trap for newbies.
Also, this hint is only valid for Linux, not for its kin like FreeBSD or Solaris.
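To make the "moderate root plus big /home" idea concrete, here is a rough sketch with parted and mkfs. The device name /dev/sdb and the sizes are purely illustrative, and ext3 is just one reasonable choice; adapt everything to your own disk:

```shell
# Hypothetical blank disk /dev/sdb -- adjust device names and sizes!
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary ext3 1MiB 15GiB        # moderate root, ~15 GB
parted -s /dev/sdb mkpart primary linux-swap 15GiB 17GiB # swap
parted -s /dev/sdb mkpart primary ext3 17GiB 100%        # /home gets the rest

mkfs.ext3 /dev/sdb1
mkswap    /dev/sdb2
mkfs.ext3 /dev/sdb3

# A small root fscks in minutes; the huge /home can be checked
# (or even left unmounted) after the system is already up.
```

The point of the layout is exactly what is said above: a damaged /home delays your data, but not your boot.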
Quote:
Originally Posted by pvs
2)hardware raid has one big minus: when your controller fails you need exactly the same controller with exactly the same settings to get back your data. On the other side hardware controller is a little faster - choose what you need more: speed or ease of management.
|
I agree about the headaches of RAID controller replacement; what a pain!
I disagree about the "speed" of HW RAID, though. It depends on many factors.
It is funny to compare, say, an i960 with a 66 MHz bus (a typical CPU on a HW RAID card) against even a desktop with today's PCI-X/PCI-E buses and a >2 GHz CPU.
In my practice, VxVM runs well on par with built-in HW RAID.
Likewise, Linux software RAID was twice as fast (or more) than a crippled low-cost Intel HW RAID (about $250), despite all the salesmen's claims. Though an expensive $700 HW RAID did run very well, about as fast as the free Linux software RAID.
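If you'd rather measure than trust the salesmen, a crude sequential-throughput comparison is easy to run yourself. A sketch, assuming /dev/md0 is your software RAID and /dev/sda the HW RAID volume (names are hypothetical, and the mount points below are placeholders):

```shell
# Crude sequential read test -- run against each array in turn:
hdparm -t /dev/md0     # Linux software RAID
hdparm -t /dev/sda     # hardware RAID volume

# Crude sequential write test; oflag=direct bypasses the page cache
# so you measure the array, not RAM:
dd if=/dev/zero of=/mnt/swraid/testfile bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/mnt/hwraid/testfile bs=1M count=1024 oflag=direct
```

Sequential dd is a blunt instrument (it says nothing about random I/O under load), but it is usually enough to expose a crippled low-cost controller.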
Quote:
Originally Posted by pvs
3a) creating RAID5 is a good idea when you have a spare drive. If one drive fails, the data will be recomputed from parity, which causes a significant slowdown. When you have a spare, the system is slow only while rebuilding the necessary data onto it; when you have no spare, the system may become completely unusable even under moderate load.
3b) in the case of a mirror, a spare is a waste of a drive (in my opinion). As for me, it's better to keep it as a standalone disk and use it for non-critical data: logs, swap, caches, temporary files, etc. In case of failure you can simply remove it; the mountpoints will be empty and the system will use space on the mirror to store all this stuff, until you replace the drive.
|
A spare drive for a mirror is not a waste if you:
a) cannot get there quickly to swap out the bad drive (remote sites)
b) are not sure about your drives
c) have a lot of mirrors; think about MTBF and the normal probability distribution of failure events.
Swap is not "non-critical data". Mishandling it can cause a system panic.
If you have lots of RAM and do not need suspend-to-disk, maybe it's better to turn swap off completely? I say this because Linux is not as aggressive as Solaris or FreeBSD about pushing everything out to swap; it feels fine with no swap at all if you have sufficient RAM for your tasks, believe me.
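Before dropping swap, it's worth checking whether your box actually touches it. A quick look via the standard /proc interfaces (Linux only):

```shell
# How much RAM and swap the kernel sees:
grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo

# Which devices/files back the swap (an empty list means no swap configured):
cat /proc/swaps

# To turn swap off entirely, as suggested above (needs root):
# swapoff -a     # and comment the swap line(s) out of /etc/fstab
```

If SwapFree stays equal to SwapTotal through your normal workload, the system is not relying on swap and removing it is low-risk.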
In the case of RAID5, a spare drive is a MUST. Otherwise you'll be lost.
Though generally I agree with your common-sense idea of allocating a third drive to non-critical data like a squid cache.
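With Linux software RAID, building a RAID5 array with a hot spare from the start is a one-liner. A sketch with mdadm; the device names are hypothetical:

```shell
# Three active members plus one hot spare (four disks total):
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# When a member dies, md pulls the spare in and rebuilds automatically;
# watch the resync progress here:
cat /proc/mdstat
```

The automatic rebuild is exactly why the spare is a MUST for RAID5: the array spends as little time as possible in the slow, unprotected degraded state.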
Yet another advantage of Linux software RAID: adding a drive to a mirror is easy and straightforward, so it can be done just when needed.
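For instance, with mdadm (assuming /dev/md0 is the existing two-disk mirror and /dev/sdc1 a freshly partitioned disk; names are illustrative):

```shell
# Add the new disk; it joins the array as a hot spare:
mdadm /dev/md0 --add /dev/sdc1

# Optionally promote it to an active member, i.e. a three-way mirror:
mdadm --grow /dev/md0 --raid-devices=3

# Watch the resync:
cat /proc/mdstat
```

So you can run a plain two-disk mirror day to day and attach the third disk only when a drive starts looking suspicious.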