Planning a new server install (to eventually run Zimbra) and I'll be going with CentOS since our file server and intranet servers both run it as well.
The server will only be serving 10-15 users so I've gone for plain old 500GB SATA drives and I'm planning to put 4 of these in a RAID 10 (we have 4 spare after recently upgrading our file server).
I've never experimented with Linux software RAID before, but it seems like it's been around a while and I've read of other people using it on a day-to-day basis.
For this server I thought I could get away with software RAID 10 since it should be less demanding on the CPU compared to RAID 5/6.
And I'd like the improved performance that RAID 10 offers over a plain RAID 1 mirror.
It seems relatively easy to set up under CentOS as there are software RAID options during the install process, but one thing I did read is that the /boot partition has to be installed on a RAID 1.
So how about this for a plan:
Create 4x 200MB software RAID partitions (one per drive), then configure them in a RAID 1 for /boot.
Then 4 more software RAID partitions (one per drive) using the remaining space for the RAID 10.
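Just so it's concrete, this is roughly what I'm picturing ending up on disk. It's only a rough sketch pieced together from the mdadm man page (I'd actually let the installer do the work, and the device names are just my assumption):

# small first partition on each drive mirrored four ways for /boot
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# the big second partitions as a RAID 10 for /
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# filesystems on top of the arrays
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1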
But then what do I do about swap?
Any other comments/suggestions on this?
Is it actually possible to boot from a Linux software RAID?
Have you thought about using a SATA RAID card? Then you can choose a RAID format that works for your data rather than worrying about CPU usage.
Yes, you can boot off software RAID (at least RAID 1, as far as I'm aware).
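One gotcha if you do boot off a RAID 1 /boot: make sure the bootloader is on more than one disk, otherwise losing the first drive can still leave you unbootable. From memory, this is the grub-legacy recipe for putting it on the second drive as well (device names assumed, so double-check against your setup):

grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit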
Yeah, I thought about hardware RAID. We actually use a 3Ware 9690SA-8i in our file server with 8x 1TB drives in a RAID 6.
But I was under the impression that RAID 5/6 need significantly more oomph than RAID 0/1/10 because of the added parity calculations.
Ideally I would go hardware RAID but the CPU in this new box is a quad-core Xeon X3360 so I think it's got more than enough grunt to handle RAID 10.
I did think I'd need to make a RAID 1 for the /boot partition then the rest of / can sit on a RAID 10.
My main question was about the stability of software RAID: is it still experimental, or would running a software RAID 10 be fine?
Also what do I do about the swap partition in the CentOS installer?
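For reference, per disk I'm picturing something like this (sizes are just guesses on my part):

sda1  ~200MB  software RAID  -> md0, RAID 1, /boot
sda2  ~2GB    swap           -> this is the bit I'm unsure about
sda3  rest    software RAID  -> md1, RAID 10, /

(same layout repeated on sdb, sdc and sdd)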
Personally I use hardware raid every time, but lots of people use software raid and it is fairly mature.
Regarding the swap partition - I wouldn't worry too much unless you happen to be running some memory-intensive applications; Linux usually won't touch swap much unless it's running out of physical memory.
(When I say "don't worry", I mean about location; you do still need swap.)
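If you want to see whether swap is even being touched once the box is up, a quick check is:

free -m       # shows swap total/used alongside RAM
swapon -s     # lists each active swap area and how much of it is in use

On a 10-15 user box with a reasonable amount of RAM you'll usually find it sitting near zero.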
Just to clarify, it's not that /boot has to be installed on RAID 1; it's just that if you want to RAID /boot, RAID 1 is the only option (just in case anyone wondered).
Linux SW raid is pretty mature.
Given swap is only used as temp workspace when the kernel runs out of RAM, I probably wouldn't bother raiding it.
You might want to have 2 swap partitions (on separate disks) to minimise downtime if the 1st one goes bad.
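Something like this in /etc/fstab would do it (device names assumed; the pri= option is optional):

/dev/sda3   swap   swap   defaults,pri=1   0 0
/dev/sdb3   swap   swap   defaults,pri=1   0 0

With equal priorities the kernel round-robins between the two; drop pri= (or make them different) if you'd rather it fill one before touching the other.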