Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I'm looking for outside opinions on the best RAID array setup for a new server that will be used primarily for virtualization. It will run either Ubuntu Server or CentOS.
Specs:
2 x 2.5GHz Xeon quad-core processors
12GB RAM
3Ware 4-port SATA II RAID (0/1/5/10 - Hot)
4 x 750GB SATA II @ 7200 RPM
My gut tells me RAID 5 or 10, but I'm interested to hear others' thoughts.
Thanks
Last edited by seattleweb; 11-02-2008 at 07:52 PM.
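For reference, the usable capacity of four 750GB disks under each of the levels being weighed here works out as follows (my own arithmetic, not from the card's documentation):

```python
disks, size_gb = 4, 750

# Usable capacity with n equal-size disks:
raid0 = disks * size_gb          # striping only, no redundancy
raid5 = (disks - 1) * size_gb    # one disk's worth of space goes to parity
raid10 = disks * size_gb // 2    # every block is mirrored once

print(f"RAID 0:  {raid0} GB")    # 3000 GB
print(f"RAID 5:  {raid5} GB")    # 2250 GB
print(f"RAID 10: {raid10} GB")   # 1500 GB
```

So the practical trade being discussed below is 2250GB usable with RAID 5 versus 1500GB usable with RAID 10.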
RAID 5 would be ideal for your setup (it maximizes usable disk space while still giving some data safety), but you want to make sure your 3Ware card is true HARDWARE RAID. If the card doesn't have an onboard chip to do the RAID 5 parity calculations, the host CPU does them instead, which means higher CPU utilization and slower RAID 5 throughput. Real hardware RAID cards are fairly expensive ($150-$200+), so if you paid less than that it is probably fakeRAID (all calculations offloaded to the CPU), and in that case I would strongly recommend NOT using RAID 5 and using RAID 10 instead (it is much easier on the CPU).
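The parity calculation in question is just XOR across the stripe. A minimal sketch (my own illustration, not the controller's actual firmware) of how a RAID 5 implementation, whether a dedicated chip or the host CPU, rebuilds a lost block:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Three data blocks on three disks; the parity block lives on the fourth.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the disk holding data[1]: XOR the survivors with parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Doing this for every write is what drives the CPU load on fakeRAID cards; a hardware XOR engine makes it free from the host's point of view.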
All 4+ port 3ware cards are true hardware RAID; they won't offload any work to the CPU.
The choice between RAID 10/0+1 and RAID 5 comes down to what this server is doing: 5 gives more space, and 10/0+1 gives faster reads and writes.
What is this server doing? If you need something that handles 100 concurrent client connections, all of them doing MySQL queries and writes, then you'll probably want RAID 10/0+1 for the speed. If the machine needs to hold 2 TB of data but isn't likely to have more than 10 simultaneous client connections, RAID 5 is fine.
It's not a question of hardware; it's a question of function.
By the way, by RAID 10/0+1 I mean either mirrors of stripes or stripes of mirrors. They are similar but not the same: with 4 disks you get the same speed and size either way, but the data is not laid out the same. The naming convention RAID 10 I take to mean mirrors (1) of stripes (0), but not all cards treat it that way.
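To make the "not laid out the same" point concrete, here is a small sketch (my own illustration, assuming disks 0/1 and 2/3 are paired) enumerating which two-disk failures each nested layout of 4 disks survives:

```python
from itertools import combinations

DISKS = {0, 1, 2, 3}

def survives_1plus0(failed):
    # Stripe over two mirrors: mirror A = {0,1}, mirror B = {2,3}.
    # The array lives as long as each mirror keeps at least one disk.
    return bool({0, 1} - failed) and bool({2, 3} - failed)

def survives_0plus1(failed):
    # Mirror of two stripes: stripe A = {0,1}, stripe B = {2,3}.
    # The array lives as long as at least one stripe is fully intact.
    return not ({0, 1} & failed) or not ({2, 3} & failed)

pairs = [set(c) for c in combinations(DISKS, 2)]
print(sum(survives_1plus0(p) for p in pairs), "of", len(pairs))  # RAID 1+0: 4 of 6
print(sum(survives_0plus1(p) for p in pairs), "of", len(pairs))  # RAID 0+1: 2 of 6
```

With these pairings, stripes of mirrors (1+0) survive 4 of the 6 possible two-disk failures, while mirrors of stripes (0+1) survive only 2, even though capacity and single-disk-failure tolerance are identical.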
With so much speed on the other components I would think you should go for some speed on the drives too, so I vote RAID 10.
With a good backup regime, I might have been tempted to go for RAID 0, striped over the 4 disks.
No, plain RAID 0 is an invitation for disaster! RAID 0 combines disks, but a failure of any one disk destroys all the data on the entire array. Think of how many times you've seen a disk fail on a desktop machine. Now put 4 disks in a server, and if any one of them fails, everything is gone. RAID 0 only makes sense on a server when it is coupled with RAID 1, which is what RAID 10 is; more accurately it would be written RAID 1+0. When you have mirrors of stripes, any one disk can fail and you can rebuild. RAID 0 basically shouldn't exist on its own.
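A quick back-of-the-envelope calculation shows why striping alone multiplies risk. The 3% annual per-disk failure rate here is my own illustrative assumption, and failures are treated as independent:

```python
# Illustrative annual failure probability per disk (assumption, not a spec).
p = 0.03

# RAID 0 over 4 disks loses everything if ANY disk fails.
raid0_loss = 1 - (1 - p) ** 4

# RAID 1+0 (two mirrored pairs, striped) loses data only when
# both disks of the same mirror fail; two independent pairs.
raid10_loss = 1 - (1 - p ** 2) ** 2

print(f"RAID 0 annual data-loss risk:  {raid0_loss:.2%}")   # ~11.47%
print(f"RAID 10 annual data-loss risk: {raid10_loss:.2%}")  # ~0.18%
```

This ignores rebuild windows and correlated failures, but the roughly two-orders-of-magnitude gap is the point.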
Your idea to use RAID 10 (RAID 1+0) on the server is sound. I doubt you'll have anywhere near enough traffic to make it necessary, but it is a fine choice.
If you mean something like the "3Ware 9650SE-xxxx RAID Controller - 4-Port SATA, RAID 0,1,10,5,JBOD, Multilane Connector, Low Profile - PCI-Ex4", it sounds as if you have at least the -4LPML version, or you wouldn't have RAID 5 and 10 as options; the 2LP doesn't offer battery backup as an option (which should be a visible difference) and won't do RAID 5 or 10.
It looks as if all of the cards in this series have an XOR engine. Without an XOR engine you have to be careful about which RAID mode you choose if you want performance, because parity calculations lean heavily on XOR.
(Actually, what they say is that "3ware's parallel XOR RAID 6 parity generation algorithm maximizes RAID 6 throughput, so that RAID 6 enabled 9650SE controllers deliver unequaled RAID 6 performance", which seems to imply XOR in hardware but doesn't quite say it. And if it is true that they all have this, it's unclear why the 2LP should omit RAID 5 and 10.)
The trouble with a lot of RAID solutions from the less experienced players in this market is that they tend to handle odd transient fault conditions poorly. I haven't tried 3Ware, but I suspect they have been doing this long enough that there is some chance they have got it right.