Replacing old SAN - is this the better solution?
I have an old SAN connected via Fibre Channel to one server, which then exports that SAN via NFS. I had a drive failure (RAID 5), and a replacement drive is over $600. Since the unit is so old, we want to replace it and free up some space (the SAN, the old server, its UPS, and the secondary controller currently take about 8U in our rack).
I am looking at something like a Buffalo 1U iSCSI rack unit with failover to a second server (we would purchase two), but I am wondering about performance.
I thought server1 would export some things and server2 would export others, and behind the scenes they would replicate and fail over to each other in case of hardware failure. I'm not sure whether that would give me the same performance, better performance, or even where to start (do I make separate LVM volumes and export them, versus one large partition?).
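To make the LVM question concrete, here is a rough sketch of carving separate logical volumes and exporting each one over NFS, so each server can own distinct shares. The volume group, volume names, sizes, and subnet below are all hypothetical placeholders, not a tested layout:

```shell
# Hypothetical layout: one volume group on the new array, one LV per export.
# Separate LVs mean each server can serve its own shares, and you can
# resize or fail over one share without touching the others.
lvcreate -L 200G -n home vg_storage    # /exports/home, served by server1
lvcreate -L 500G -n data vg_storage    # /exports/data, served by server2

mkfs.ext4 /dev/vg_storage/home
mkfs.ext4 /dev/vg_storage/data

# /etc/exports on server1 (server2 gets the analogous line for /exports/data):
# /exports/home  192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -ra                           # re-read /etc/exports
```

One large partition is simpler, but then everything fails over as a single unit and you lose the active-active split described above.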
This is really new stuff for me, so I'm looking for things to look at, things to avoid, etc.
A Honda is not a Rolls Royce, but a lot of people still buy them.
An enterprise RAID system has hot-swappable parts, mirrored non-volatile write cache, dual RAID controllers, and enterprise disks.
A low-end RBOD has a single RAID controller, "good quality" SATA drives, and you provide a UPS and hope for the best.
Your plan of active-active operation and simultaneous replication and failover, even of different LUNs, sounds complex to me. Is this mode supported by the storage? How will you test it? If you do this, do not try to combine them. Export as separate volumes. Personally I would be cautious and do active-standby unless I had experience with these systems.
Performance will mostly be dictated by the number of drives and how well your access is spread across them. A few large drives will give worse performance than your old system's many smaller drives.
Real-time replication will slow write performance quite a bit. Vendors can cheat and do write-behind, which means you lose some data on a failover; read the specs carefully, as this may be configurable. Also, if the replication link goes down, how does it tell you? Nobody ever checks the pressure on their spare tire until they need it.
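On the spare-tire point: even a crude cron check on the replication link is better than silence. A minimal sketch is below; the peer address is a placeholder, and a real check would also query the replication software's own status command rather than just pinging the peer:

```shell
#!/bin/sh
# check_repl_link: report whether the replication peer answers at all.
# This only proves the network path is alive; a real monitor would also
# ask the replication software for its sync state.
check_repl_link() {
    peer="$1"
    if ping -c 1 -W 2 "$peer" > /dev/null 2>&1; then
        echo "replication link to $peer: up"
    else
        echo "replication link to $peer: DOWN"   # send mail/alert here when run from cron
        return 1
    fi
}

check_repl_link 127.0.0.1    # replace with the replication peer's address
```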
Search for "linux mirror cluster failover" and be ready for a lot of reading.
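For reference, one stack that search usually turns up is DRBD (block-level mirroring between two nodes) under a cluster manager such as Pacemaker for the failover. A DRBD resource definition looks roughly like this; the hostnames, devices, and addresses are placeholders, not a tested config:

```
resource r0 {
    protocol C;              # fully synchronous replication (the "no cheating" mode)
    device    /dev/drbd0;
    disk      /dev/sdb1;     # backing partition on each node
    meta-disk internal;
    on server1 {
        address 192.168.1.10:7789;
    }
    on server2 {
        address 192.168.1.11:7789;
    }
}
```

Protocol C is the safe, slower-write option; DRBD's protocols A and B are the write-behind variants that can lose some data on a failover, exactly the trade-off described above.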
Architecting Linux High-Availability Clusters
Thanks for the replies, and smallpond, your analogies are simply priceless!
The storage device handles a lot of the testing/failover and notifications, but from my endless reading, these units do tend to be a bit slow, since they only ship with 5400 RPM drives and you can't buy the device diskless. My other thought, as you said, was to get a 1U with six 2.5" bays and go with an open source package like Openfiler (http://www.openfiler.com/), but I have never used it, so it's a bit of an investment to test; I may do an in-house test first.
@whizje- thanks, going to get a large cup of coffee and start reading!
I don't want to save a dollar now and pay for it for years, stuck with expensive proprietary hard drives, which is why I want to go with normal drives, or maybe a 1U with four 15K disks to get even faster performance while keeping the 1U footprint!
Thanks again guys :)