LinuxQuestions.org
Linux - Hardware This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?

Old 11-23-2008, 09:42 PM   #1
jadeddog
LQ Newbie
 
Registered: Jun 2008
Posts: 26

Rep: Reputation: 15
raid6 questions/implementation


OK, so I've searched the existing RAID6 threads and can't find all the answers to my questions, so I apologize in advance if these are simple ones, but I've never set up a RAID system in Linux before... be kind please.

My situation is that I need a file server that is quite large (for personal use, not business), and I want redundancy, so I was thinking about RAID6 as my solution. The overall usable size should be in excess of 5 TB at minimum, and preferably 7-8 TB. So here are my questions:

1. Software vs. hardware? I'm thinking about a 12-drive system. Do I *need* a hardware controller to do this? Assuming I can get a mobo that supports 12 drives, I was hoping to avoid the roughly $700 cost of a PCI-X controller that supports 12 drives and use software RAID instead. Is this possible? If it is, are there any real drawbacks (keeping in mind that this box will only be a file server and doesn't need to do any other tasks)?

2. Is there any distro known for being better for RAID6? Since RAID6 still isn't "official" (as far as I know), I'm wondering if any distro has a good reputation for it.

3. Do I really need a hot spare? Since this is just for personal use, I don't much care about rebuild time, so do I really need one? Is there any other advantage to having one besides reduced downtime?

4. Can I increase the size of a RAID6 array after the initial build? Say I get a mobo/controller that allows 12 drives but only want to buy 8 right now. Does RAID6 allow adding the other 4 drives after the initial build? I was thinking this might be possible using LVM in some manner, but I'm not an expert with RAID/LVM by any means, so I'm not sure.

5. Follow-up to #4: if it is possible to add drives after the initial build, do they have to be the exact same size and manufacturer?

6. For failed drives, do the replacement drives have to be exactly the same?

That's about it. I realize I'm asking a lot of you all, but any guidance would be greatly appreciated.
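For reference, the software-RAID route asked about in #1 and #4 can be sketched with mdadm. This is only a sketch: the device names and mount point are made up, the commands need root and will destroy existing data, and whether `--grow` can reshape a RAID6 array depends on your kernel and mdadm version.

```shell
# Create an 8-drive software RAID6 array (example devices /dev/sd[b-i]):
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Put a filesystem on it and mount it (example mount point):
mkfs.ext3 /dev/md0
mount /dev/md0 /srv/storage

# Later, add four more drives and grow the array (question #4).
# Note: RAID6 reshape support depends on kernel/mdadm version.
mdadm /dev/md0 --add /dev/sd[j-m]
mdadm --grow /dev/md0 --raid-devices=12

# Once the reshape finishes, grow the filesystem to match:
resize2fs /dev/md0
```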
 
Old 11-24-2008, 01:23 AM   #2
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
Take a look at man mdadm (software RAID).

Here is a wiki article discussing RAID: http://en.wikipedia.org/wiki/Redunda...are_RAID.22.29

You will take a performance hit with software RAID compared to hardware RAID, since the parity calculations run on the CPU.

If you are going to RAID6 for performance (speed) reasons, forget about it. 100 Mbit Ethernet transfers at about 9.5 MB/s (if you are lucky) and GigE at about 95 MB/s. Most modern drives are in the 70 MB/s range, with quite a few in the 100 MB/s range (SATA/PATA drives). So if your pipe cannot move data as fast as the drives can deliver it, the extra speed is just wasted. For a video server application, a 70 MB/s drive is plenty fast.

The reality is that no matter what RAID solution you use, you are going to need backups. Do not think for a minute that any RAID solution will protect you. I lost a server in the basement to a baseball coming through a window, and it rained before I got home. In cases like that, RAID will do you zero (0) good. I had a client whose aquarium broke in the living room, with similar results.

Just look at your cost-to-value ratio. For a home system, I would keep things backed up and just run eight 1 TB drives. Once you go to RAID you need extra drives for parity (two per array for RAID6) on top of the storage you actually use, plus the cost ($ for hardware, or CPU cycles for software) of the RAID itself.
 
Old 11-24-2008, 02:54 AM   #3
jadeddog
LQ Newbie
 
Registered: Jun 2008
Posts: 26

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by lazlow View Post
Take a look at man mdadm (software raid). [...] For a home system, I would keep things backed up and just run 8 1Tb drives.
No, I wasn't going to RAID for performance, more for data redundancy, to be honest. I know, I know, RAID isn't a backup, but I can't afford to double up my drives for a true backup (which is really just RAID1 and doesn't provide good backups anyhow), and I can't afford to spend $2000 on a tape backup system... unless you know of a backup system for around $500-750 that would let me back up around 7-8 TB of data?
 
Old 11-24-2008, 11:57 AM   #4
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
That is just the point. Any backup solution will require roughly (ballpark) as much extra hard drive space as the data itself. And RAID costs you capacity on top of that: the parity levels (3, 4, and 5) lose one drive's worth of space per array, and RAID6 loses two. So to get 8 TB of usable RAID6 storage from 1 TB drives you need 10 drives, and backing that up doubles the drive count again. Take a look at the link I provided. Assuming that the vast majority of the data will not change over time (video collection, etc.), the easiest way out is just to use backup drives in a singular fashion (either USB or drive slides). Add new stuff to the main drives and the backups at the same time; when a backup drive is full, move on to the next.
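The capacity arithmetic above is easy to check in a couple of lines of shell (the drive counts and sizes here are just the numbers from this thread):

```shell
#!/bin/sh
# Usable capacity of an n-drive parity array with d-TB drives:
# RAID5 loses one drive's worth of space to parity, RAID6 loses two.
n=12
d=1
raid5=$(( (n - 1) * d ))
raid6=$(( (n - 2) * d ))
echo "RAID5: ${raid5} TB usable, RAID6: ${raid6} TB usable"
# With 12 x 1 TB drives: RAID5 gives 11 TB, RAID6 gives 10 TB.
```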
 
Old 11-24-2008, 08:16 PM   #5
jadeddog
LQ Newbie
 
Registered: Jun 2008
Posts: 26

Original Poster
Rep: Reputation: 15
Yeah, I guess I could just go with RAID1 plus LVM so I can increase the size as needed... I'm just trying to get around having to spend double on the drives... grrr.
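For the record, the RAID1-plus-LVM idea mentioned above looks roughly like this with mdadm and the LVM tools. Device names, the volume group name "storage", and the logical volume name "data" are all made up for the example; the commands need root and will wipe the devices.

```shell
# Two mirrored pairs (example devices):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Pool the mirrors into one volume group and carve out a volume:
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n data storage
mkfs.ext3 /dev/storage/data

# Later, grow the pool by adding another mirrored pair:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf
pvcreate /dev/md2
vgextend storage /dev/md2
lvextend -l +100%FREE /dev/storage/data
resize2fs /dev/storage/data
```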
 
Old 11-24-2008, 08:31 PM   #6
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
Not a big fan of LVM either. It is fine until it breaks; when it breaks, hold on to your shorts. If you need HUGE files (file size larger than drive size), then you have no choice. What you can do (assuming normal-size files) is just put the mount points of the added drives within the current drives, so each new drive (or drive set) appears as a subdirectory of the main drive (or drive set). If anything catastrophic does occur with this setup, you can always mount the added drive sets independently of the master drive, whereas if an LVM breaks, everything is broken.

This is the way I prefer to set up drives: the first set (a 2-drive array) as master, and then each following set (again 2 drives) mounted as a subdirectory of the master. If the master breaks, you still have access (manual mount elsewhere) to all your other drives. If any of the secondary sets breaks, you just unmount and repair it without interfering with the rest of the data. I also keep the OS on separate drives from the data.
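The subdirectory-mount scheme described above might look like this in /etc/fstab (the array names, mount paths, and filesystem choice are examples, not a prescription):

```shell
# /etc/fstab sketch: each secondary array mounts inside the master.
/dev/md0  /data         ext3  defaults  0 2   # master pair
/dev/md1  /data/video   ext3  defaults  0 2   # second pair
/dev/md2  /data/music   ext3  defaults  0 2   # third pair

# If /dev/md0 dies, the others can still be mounted on their own:
#   mount /dev/md1 /mnt/rescue
```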
 
Old 11-25-2008, 07:10 PM   #7
uberNUT69
Member
 
Registered: Jan 2005
Location: Tasmania
Distribution: Xen Debian Lenny/Sid
Posts: 578

Rep: Reputation: 30
Take a look at LVM on Linux raid10. (Still 2x the number of drives)
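A Linux md RAID10 under LVM, as suggested, would look something like this (example devices and names again; needs root and wipes the drives):

```shell
# md RAID10 across four drives; usable space is half the raw total:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]

# LVM on top so the pool can be extended later:
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -l 100%FREE -n data storage
```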
 
  

