LinuxQuestions.org
Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?

Old 08-02-2009, 05:14 AM   #16
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578
Blog Entries: 31

Rep: Reputation: 1208

Hello oli
Quote:
Originally Posted by oli View Post
Hi Charles. Basically what led me to my current situation is ...
Thank you for sharing your thoughts. I had, pro tem, accepted the "hardware is best" maxim without fully understanding it, so it is very helpful to have a balanced, detailed analysis of the pros and cons.

Best

Charles
 
Old 08-02-2009, 03:41 PM   #17
eRJe
Member
 
Registered: May 2005
Location: Netherlands
Distribution: Slackware 14.1 Kernel 3.12.1
Posts: 103

Original Poster
Rep: Reputation: 16
I still like to believe that RAID(-5) would be a good thing to do when you put multiple drives together. However, there seems to be a group of people who argue that RAID won't help you much with HD failures, because the chance of having multiple drives fail is supposedly just as high(?) as having only one fail (whether it is the drive itself or the controller that messes things up). I don't know how much of that is true, but funnily enough this happened to my RAID-5 array last week!

For this reason I would consider RAID-6. But I guess that would then be pointless too, because if two drives can fail, 3 drives could also fail?!

If this is really true, I suppose those who say RAID is more complex to configure, maintain and recover, and that you'll probably still lose data, are right. In that case you would be better off with two JBOD arrays: one with the original data and one for backup, with the backup preferably somewhere else. Although it doubles the price of adding space, you will probably save quite a few bucks on other hardware and on time.

So why is RAID-5 still being used if it's pointless?

Coming back to my situation: I think Oli's solution could work for me. I create a RAID-5 array with LVM on top, and when the array gets full, I buy a new set of drives with the best $-per-GB rate and add them to the volume group.

I considered Electro's tip of getting myself a DAS with RAID, but those are a little bit out of my budget. Instead I will get myself a 16-bay server case. I just need to find a nice SATA controller. Any suggestions? Still deciding whether to go software RAID or hardware RAID, but I think software.
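As a rough sanity check of this grow-with-LVM plan, the usable capacity works out as follows; the drive counts and sizes here are hypothetical examples, not a recommendation:

```python
def raid5_usable(n_drives: int, drive_tb: float) -> float:
    """RAID-5 spends one drive's worth of space on parity,
    so usable capacity is (n - 1) * drive size."""
    if n_drives < 3:
        raise ValueError("RAID-5 needs at least 3 drives")
    return (n_drives - 1) * drive_tb

# Start with one 4 x 2 TB RAID-5 array as the only PV in the VG.
vg_tb = raid5_usable(4, 2.0)      # 6 TB usable
print(vg_tb)

# Later, build a second array from cheaper 4 x 3 TB drives and
# extend the same volume group with it (vgextend, in LVM terms).
vg_tb += raid5_usable(4, 3.0)     # + 9 TB
print(vg_tb)                      # 15 TB total in the VG
```

Each new drive set becomes its own RAID-5 array, so a failure in one set never touches the parity of the other.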
 
Old 08-04-2009, 08:07 AM   #18
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
Quote:
Again, the more drives you add to any computer, the higher the chance of failure than with one drive. Three hard drives have three times the chance of failure compared to one disk.
Wrong - it's not linear like that. There is a chance that one of the drives could go bad, but in true RAID you lose no data if one drive goes bad. With a single-disk configuration, however, you lose data if the single disk goes bad.
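A toy model makes the non-linearity concrete. Assuming independent failures and a made-up 3% annual per-drive failure rate (and ignoring the rebuild window), redundancy changes the picture dramatically:

```python
# Toy model: each drive fails independently within a year with prob p.
p = 0.03

# Single disk: data is lost whenever the one disk fails.
single_loss = p

# 2-disk RAID-1 mirror: data is lost only if BOTH disks fail
# (the rebuild window raises this somewhat in practice).
raid1_loss = p * p

print(f"single disk : {single_loss:.4%}")   # 3.0000%
print(f"RAID-1 pair : {raid1_loss:.4%}")    # 0.0900%
```

So while the chance of *some* drive failing does grow with drive count, the chance of *losing data* drops by orders of magnitude once there is redundancy.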

RAID 0, as someone pointed out, isn't really RAID because it is JBOD (just a bunch of disks), meaning you're essentially using single disks for each of your filesystems/raw devices and so have the same risk as a single-disk configuration.

The only other time multiple disks increase your risk of losing things is in a simple concatenation configuration (like LVM without mirroring), because your logical disk would span multiple disks without any redundancy. The only time I've done such concatenation at the software level was when I had RAID at the hardware level (e.g. with Dell's PERC card).

There is no scenario in which true RAID increases your risk over a single-disk configuration from a disk standpoint. The R stands for Redundant, and your attempt to change it to something else doesn't "really" fly.
 
Old 08-04-2009, 09:15 AM   #19
oli
Member
 
Registered: Jun 2007
Location: Australia
Distribution: Centos and Fedora
Posts: 34

Rep: Reputation: 16
If you're going to believe this argument about RAID being useless, and the idea that "if a disk dies then another disk can die, then another disk can die, taking my data with it", then you may as well just delete all your data and stop using computers. After all, your disks could die, your controller could die, your backup disks could die, the house with your offsite backups could die, and the company who hosts your remote backups could disappear off the face of the planet, all at the same time!

Quote:
Originally Posted by eRJe View Post
Coming back to my situation. I think Oli's solution could work for me. I create a RAID-5 array with LVM on top. When the array gets full, I buy a new set of drives with the best $ per GB rate and add them to the volume group.

I considered Electro's tip of getting myself a DAS with RAID. But those are a little bit out of my budget. Instead I will get myself a 16-bay server case. I just need to find a nice SATA controller. Any suggestions? Still deciding whether to go software RAID or hardware RAID, but I think software.
I think you'd be making a good choice with software RAID. Maybe I'm biased, though, since you like my suggestion.

I have a large Lian Li tower case and have a total of 16 drives in it. As long as you get a good power supply you should be fine.

How fast do your storage needs grow? That really determines what kind of controller you should get...
 
Old 08-04-2009, 11:07 AM   #20
eRJe
Member
 
Registered: May 2005
Location: Netherlands
Distribution: Slackware 14.1 Kernel 3.12.1
Posts: 103

Original Poster
Rep: Reputation: 16
On average I would probably like to expand my storage by 2 TB every 3-4 months.

I'm thinking of getting the 16-bay 19" case from Chenbro and starting with one 8-port SATA controller from Supermicro. By the time it gets full, I'll get another one and create a new RAID array which I can add to the volume group. I could also get a SAS controller. I don't know which is best.

How big a power supply do you have in that tower, Oli?
 
Old 08-04-2009, 07:52 PM   #21
oli
Member
 
Registered: Jun 2007
Location: Australia
Distribution: Centos and Fedora
Posts: 34

Rep: Reputation: 16
Quote:
Originally Posted by eRJe View Post
On average I would probably like to expand my storage by 2 TB every 3-4 months.

I'm thinking of getting the 16-bay 19" case from Chenbro and starting with one 8-port SATA controller from Supermicro. By the time it gets full, I'll get another one and create a new RAID array which I can add to the volume group. I could also get a SAS controller. I don't know which is best.

How big a power supply do you have in that tower, Oli?
I don't know much about Supermicro controllers. All I can say is that 3ware and Adaptec have served me well. Always check the knowledge base for a controller before buying your drives, though! I bought two 1.5 TB Seagate drives and they aren't supported properly by my newer Adaptec controller; of course, I only discovered this after a week or so of uptime followed by a crash with SCSI timeouts, etc.

SAS controllers are good for cabling (thinner and more flexible) but they're more expensive.

I don't know if you can get these cases easily where you live, but they are highly regarded among enthusiasts because they're much cheaper than Chenbro and other brands:

http://www.norcotek.com/item_detail....delno=RPC-4020

Here in Australia people are importing them from the USA and even after all the shipping costs they are paying about half of what the Chenbro cases cost locally... Definitely worth investigating.

I think I have a Thermaltake 620 or 680 watt power supply.
 
Old 08-05-2009, 03:48 PM   #22
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
RAID level 0 is still categorized as RAID because it is an array of inexpensive disks. If you do not like RAID-0 being categorized as RAID, do not use it.

Another reason I say RAID stands for "Really an Array of Inexpensive Disks" is that it should really be called AID, with the R thrown in the trash. IT professionals who want the R in RAID to mean "redundant" can do so if they want to follow that band; I prefer to follow my own band. If IT professionals cared enough about the R in RAID being redundant, they would take redundant precautions such as backups and hot spares. RAID is never redundant when there is a hard drive failure, because during the data reconstruction time you have to hope that another drive will not fail. Worrying about failure is nowhere close to RAID being RAID. It is an AID. It gives you more performance and more disk space than one single disk can at this time.


Quote:
So why is RAID-5 still being used if it's pointless?
The benefit of RAID-5 is that it gives the user more time, before total failure, to put in a disk as a hot spare, although the array is still vulnerable during data reconstruction. RAID-5's performance comes in when multiple writes have to be done, but its reads are served one at a time; this depends on the software and differs from one controller to another, and software RAID may or may not include this feature. RAID-6 is the same, but it gives the user two chances to put in a hard drive as a hot spare.

For your setup, use RAID-100 and add it to either LVM or EVMS. You probably want to learn more about EVMS, because it has features you can use once you have reached the limit on the number of controllers for one server. You can set up RAID-100 with two DAS enclosures each configured as RAID-10, then use software RAID to combine the two in RAID-0. If each enclosure has a throughput of 80 megabytes per second, the total of 160 megabytes per second should provide enough bandwidth for a 1 Gb network.
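A quick arithmetic check of the bandwidth claim above (the 80 MB/s per-enclosure figure is the hypothetical number from this post, not a measurement):

```python
# Two enclosures at 80 MB/s each, striped together (RAID-0 on top).
enclosure_mb_s = 80
total_mb_s = 2 * enclosure_mb_s     # 160 MB/s aggregate

# A 1 Gb/s network moves at most 1000 / 8 = 125 MB/s before
# protocol overhead, so the array would not be the bottleneck.
gigabit_mb_s = 1000 / 8
print(total_mb_s, gigabit_mb_s)     # 160 125.0
print(total_mb_s > gigabit_mb_s)    # True
```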

If your goal is 4 TB now and another 4 TB a few months from now, it is probably best to set your goal at 8 TB. For this amount of space you should think about a rack. Just do not forget that you have to do the same for the backup server. To save space, try archiving your old projects to Blu-ray discs.

I recommend staying away from Seagate drives. I recommend Hitachi or Western Digital because they do not have defects like Seagate's. Seagate's 1.5 TB hard drive has a defect, and they are not recalling them.
 
Old 08-05-2009, 07:08 PM   #23
oli
Member
 
Registered: Jun 2007
Location: Australia
Distribution: Centos and Fedora
Posts: 34

Rep: Reputation: 16
Quote:
Originally Posted by Electro View Post
RAID level 0 is still categorized as RAID because it is an array of inexpensive disks. If you do not like RAID-0 being categorized as RAID, do not use it.
There's no redundancy. It's a mistake that it was ever called RAID. It should just be referred to as disk striping.

Quote:
Originally Posted by Electro View Post
Another reason I say RAID stands for "Really an Array of Inexpensive Disks" is that it should really be called AID, with the R thrown in the trash. IT professionals who want the R in RAID to mean "redundant" can do so if they want to follow that band. I prefer to follow my own band.
You aren't following a band. You're leading one, and it's a solo, not a band.

Quote:
Originally Posted by Electro View Post
If IT professionals cared enough about the R in RAID being redundant, they would take redundant precautions such as backups and hot spares.
Backups do not provide redundancy. They provide a backup to restore from when things fail completely. Restoration takes time and this is downtime.

How on earth are hot spares supposed to work if you don't have RAID? Are you saying that in a single-disk setup, which you seem to think is superior, we can have a hot spare which takes over once the first drive dies? It takes over magically without having any data on it, does it? Or does it magically get the data from the dead drive and then continue running without downtime?

Quote:
Originally Posted by Electro View Post
RAID is never redundant when there is a hard drive failure, because during the data reconstruction time you have to hope that another drive will not fail. Worrying about failure is nowhere close to RAID being RAID. It is an AID.
Have you heard of RAID-6?

With most other RAID levels you do have to hope that. But it's better than running a single-disk setup where, rather than hoping, you know with 100% certainty that a disk dying means downtime.
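The "hope during the rebuild" argument on both sides can be put into numbers with a toy model; the 8-drive array and the 1% per-drive failure chance during the rebuild window are invented figures for illustration only:

```python
from math import comb

# Toy rebuild-window model: after one drive dies, the array survives
# only if the remaining drives hold out until the rebuild finishes.
p_fail = 0.01        # per-drive failure chance during the window
n_drives = 8
k = n_drives - 1     # survivors doing the rebuild

# RAID-5: any second failure among the 7 survivors loses the array.
raid5_survive = (1 - p_fail) ** k

# RAID-6: one more failure is tolerated, so the array dies only if
# two or more of the 7 survivors fail during the window.
raid6_survive = (1 - p_fail) ** k + comb(k, 1) * p_fail * (1 - p_fail) ** (k - 1)

print(f"RAID-5 survives rebuild: {raid5_survive:.4f}")  # 0.9321
print(f"RAID-6 survives rebuild: {raid6_survive:.4f}")  # 0.9980
```

Under these made-up numbers there is indeed a real chance a RAID-5 rebuild fails, but it is nowhere near the certainty of data loss that a dead single disk gives you.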

Quote:
Originally Posted by Electro View Post
It gives you more performance and disk space than one single disk can at this time.
Wrong again. There are no true RAID levels which provide more disk space; they all lose some space for added redundancy.

Quote:
Originally Posted by Electro View Post
The benefit of RAID-5 is that it gives the user more time, before total failure, to put in a disk as a hot spare, although the array is still vulnerable during data reconstruction.
RAID-5 does not need a hot spare. The data is less vulnerable than in a non-RAID setup.

Quote:
Originally Posted by Electro View Post
For your setup, RAID-100 and add it to either LVM or EVMS.
RAID100? Is this another made up term you have introduced to your solo camp? I can't find any reference to it anywhere and have never heard of it.

Why are you suggesting RAID for the OP's setup? You keep saying it's useless.
 
Old 08-06-2009, 07:00 AM   #24
eRJe
Member
 
Registered: May 2005
Location: Netherlands
Distribution: Slackware 14.1 Kernel 3.12.1
Posts: 103

Original Poster
Rep: Reputation: 16
Quote:
Originally Posted by oli View Post
I don't know if you can get these cases easily from near where you live, but they are highly regarded for enthusiasts because they're much cheaper than Chenbro and other brands:

http://www.norcotek.com/item_detail....delno=RPC-4020

Here in Australia people are importing them from the USA and even after all the shipping costs they are paying about half of what the Chenbro cases cost locally... Definitely worth investigating.
They have some nice products! Unfortunately there is no local shop that sells their products, and I'd be paying $185 for shipping, which makes it almost as expensive as the Chenbro products. Adding the risk of having to pay additional tax fees, I think I'd rather go for the locally available Chenbro. But thanks for the link!

@Electro, I would have liked to go with one or more DAS enclosures, but it's just a bit out of my budget. By getting myself a 16-bay server case and 1 or 2 (RAID) controllers, I can also create 2 or 3 RAID arrays and add them together with LVM.

Last edited by eRJe; 08-06-2009 at 07:06 AM.
 
Old 08-06-2009, 07:18 AM   #25
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
I'm beginning to think there are slightly reformed Amish persons on the list. They finally convinced themselves to use technology but can't quite force themselves to use ADVANCED technology.

RAID is surely an abomination and God will smite thee for using it.
 
Old 08-06-2009, 07:52 AM   #26
oli
Member
 
Registered: Jun 2007
Location: Australia
Distribution: Centos and Fedora
Posts: 34

Rep: Reputation: 16
Quote:
Originally Posted by eRJe View Post
I think I'd rather go for the locally available Chenbro. But thanks for the link!
Fair enough. Another thing you should keep in mind is that this type of equipment is designed to go into rack environments, where noise is generally not an issue at all. Trust me, you do not want one of these things sitting next to your desk; the fans are very, very loud.


lol @ jlightner
 
Old 08-06-2009, 08:07 PM   #27
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Quote:
Originally Posted by oli View Post
There's no redundancy. It's a mistake that it was ever called RAID. It should just be referred to as disk striping.
That is why I call all the levels AID. It suits my acronym better.


Quote:
You aren't following a band. You're leading one, and it's a solo, not a band.
Your thinking is solo too, in believing that software RAID is the best. Thinking of this setup as AID instead of RAID makes more sense.

Quote:
Backups do not provide redundancy. They provide a backup to restore from when things fail completely. Restoration takes time and this is downtime.
Backups take up the slack that AID does not have or has lost during a multiple disk failure. Even the best AID setup will fail, so any setup will have downtime.

Quote:
How on earth are hot spares supposed to work if you don't have RAID? Are you saying in a single disk setup which you seem to think is superior we can have a hot spare which takes over once the first drive dies? It takes over magically without having any data on it does it? Or does it magically get data from the dead drive then continue running without downtime.
Hot spares only work with certain levels of AID.

Quote:
Have you heard of RAID6?
If you had read my post instead of skimming it like a dummy, you would know that I know about RAID-6, or AID-6.


Quote:
With most other RAID levels you do have to hope that. But it's better than running a single disk setup where rather than hoping anything you just know with 100% certainty that a disk dying is equal to downtime.
If you are using AID-5, you have only one chance for a drive to fail before doubling your chances of losing all data. AID-6 gives you two chances of a drive failure before doubling your chances of losing all data.

Quote:
Wrong again, There are no true RAID levels which provide more disk space. They all lose some space for added redundancy.
Actually, AID striping levels do provide more space compared to a single disk. You have to balance between more space, and more space with a chance of keeping your data when there is a drive failure. AID is not redundant in any way. It is just a chance card.

Quote:
RAID5 does not need a hot spare. The data is less vulnerable than running a non RAID setup.
Again, if you are using AID-5, you have only one chance for a drive to fail before doubling your chances of losing all data. AID-6 gives you two chances of a drive failure before doubling your chances of losing all data.

Quote:
RAID100? Is this another made up term you have introduced to your solo camp? I can't find any reference to it anywhere and have never heard of it.

Why are you suggesting RAID for the OP's setup? You keep saying it's useless.
RAID-100, or AID-100, is a nested AID-level setup. It combines two AID-10 arrays into an AID-0, which makes AID-100. AID-5 and AID-6 are not always best for one large disk, because the chance of losing data increases as more drives are added.

AID-100 spreads out the points where drives can fail instead of concentrating them in one spot. It also adds throughput, which should be more than enough for 1 Gb networking and for RAW video. Third, if there is a drive failure, it is not penalized while reconstructing the data.

I am using AID to provide more disk space and more performance. I said nothing about AID-100 being redundant.
 
Old 08-06-2009, 08:26 PM   #28
oli
Member
 
Registered: Jun 2007
Location: Australia
Distribution: Centos and Fedora
Posts: 34

Rep: Reputation: 16
I don't think I'll bother responding; there's clearly no point in debating this topic with someone who makes up their own terminology and dismisses a system that is used all over the IT industry.

I'll stick to helping eRJe as far as this topic goes.
 
Old 08-07-2009, 01:18 AM   #29
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Another rack case possibility is the following.

http://www.addonics.com/products/rai...k_overview.asp
 
Old 08-07-2009, 07:53 AM   #30
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
Quote:
Originally Posted by oli View Post
I don't think I'll bother responding; there's clearly no point in debating this topic with someone who makes up their own terminology and dismisses a system that is used all over the IT industry.

I'll stick to helping eRJe as far as this topic goes.
What's sad is that he is at Guru level for having over 5,000 posts. I wonder how many of those are as devoid of expertise as the ones he's put in this thread.

What if we get the information from a major commercial distribution of Linux?
http://www.redhat.com/docs/manuals/l...e/ch-raid.html

The link even has a brief comment on "Why you should use RAID".

Or how about this excerpt from a published book on disaster recovery at:
http://books.google.com/books?id=S1i...ons%22&f=false

Which says in part:
"the question isn't whether you should be using RAID, but rather which RAID level you should be using."

Having used RAID in four separate commercial shops, including two Fortune 500 and one Fortune 100 companies, as a professional UNIX and Linux administrator for over 10 years, I know that corporations see the benefit of RAID even if some hobbyists who can't quite figure out how to administer it do not.

Last edited by MensaWater; 08-07-2009 at 08:52 AM.
 
  

