LinuxQuestions.org
07-24-2014, 03:00 PM   #1
lleb
Senior Member

Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Rep: 551
RAID10 or RAID5 for media server?


I just purchased the last 3 drives required to upgrade my server to a
6 × 3TB RAID array for media storage and video playback. I plan on
upgrading to CentOS v7 at the same time. The server has the following tasks:

1. NFS server, both for backing up data from laptops and for media
playback via XBMC, PlexMedia, or WDTV.
2. F@H runs 24/7.
3. Local Minecraft server for my son.

That's all it currently does; I might add other services to it later.
That stated, video playback is its #1 priority. Everything else can go
by the wayside, but not the playback performance/quality of streaming
on the LAN. I'm not as concerned with WWW playback via Plex at this
point in time.

I've been mucking around with different RAID levels and was thinking of
going RAID 10, but some quick calculations tell me I would only have a
little less than 9TB of storage if I go RAID 10. On the other hand, if
I go RAID 5, I get 15TB of storage and still have protection against
data loss if I lose a drive (or two, with 6 disks?).

Is that correct? Also, is there enough of a performance boost from 10
over 5 to justify losing 50% of the total drive capacity for protection
against lost drives?

Keep in mind that just about everything on the server is replaceable;
this is not for a business, just personal use. Still, I would like a
layer of protection, which is why I'm not looking at RAID 0: high
performance, but zero data protection, and if you lose one drive, you
lose 100% of the data. That's just not worth it to me.

I am currently sitting on no less than 4.6TB of media, so while 9TB
would basically double my current used space, 15TB would last longer
before I had to expand.
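As a sanity check on those numbers, the usable-capacity arithmetic for the common levels can be sketched as follows (a rough sketch, assuming mdadm-style layouts with 2-way mirrors for RAID 10; the function name is just for illustration):

```python
# Usable capacity for common RAID levels, given n identical drives of size d (TB).
# RAID 10 here means striped 2-way mirrors; RAID 5 loses one drive to parity,
# RAID 6 loses two.

def usable_tb(level: str, n: int, d: float) -> float:
    if level == "raid0":
        return n * d            # pure striping, no redundancy
    if level == "raid10":
        return n * d / 2        # half the drives hold mirror copies
    if level == "raid5":
        return (n - 1) * d      # single parity
    if level == "raid6":
        return (n - 2) * d      # double parity
    raise ValueError(f"unknown level: {level}")

for lvl in ("raid0", "raid10", "raid5", "raid6"):
    print(lvl, usable_tb(lvl, 6, 3.0), "TB")
# raid10 of 6 x 3TB -> 9.0 TB; raid5 -> 15.0 TB; raid6 -> 12.0 TB
```

So the 9TB vs. 15TB figures in the post check out, with RAID 6 splitting the difference at 12TB.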

Thanks in advance.
 
07-25-2014, 08:14 AM   #2
roreilly
Member

Registered: Aug 2006
Location: Canada
Distribution: Debian, Slackware
Posts: 106

Rep: 28
Before you consider RAID 5 with those 3TB drives, read this:

http://www.zdnet.com/blog/storage/wh...ng-in-2009/162

I went RAID 6 for this reason in my 8 × 3TB array.
 
07-25-2014, 10:17 AM   #3
Guttorm
Senior Member

Registered: Dec 2003
Location: Trondheim, Norway
Distribution: Debian and Ubuntu
Posts: 1,453

Rep: 447
Hi

You have six 3TB disks, no?

RAID 10 is more sensible with 4 disks, and you lose half the capacity. The reason to use RAID 10 is speed.

With RAID 5 you lose the size of one disk. Reading from RAID 5 can be quite fast, but writing is slower. With lots of disks, it becomes faster.

For a media server, the speed of the disks is usually not that important. It doesn't matter at all when playing movies, music and such. It matters when copying to and from the server, but there is usually a much bigger difference if you use gigabit Ethernet cables instead of WiFi.

With RAID 6 you lose the size of 2 disks, but 2 disks can fail and it still works.

And be sure you set up some service that sends e-mail when a disk fails. If you use mdadm, it will send email to root@localhost. If you don't read that, forward it somewhere else.
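For example, a minimal sketch of mdadm mail alerting (the address is a placeholder, and the config path varies by distro):

```shell
# /etc/mdadm.conf (CentOS/RHEL; Debian-based systems use /etc/mdadm/mdadm.conf):
#
#   MAILADDR you@example.com
#
# To verify delivery, mdadm's monitor mode can send a test alert per array:
#
#   mdadm --monitor --scan --test --oneshot
```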
 
07-25-2014, 10:33 AM   #4
lleb
Senior Member

Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Original Poster
Rep: 551
Thanks, both of you. I was thinking between 5/10 because of my motherboard's onboard RAID support, but after some further research I'll be investing in a RAID card, or possibly creating a software RAID via CentOS...

I'll probably go with RAID 6. Many thanks.
 
07-25-2014, 01:37 PM   #5
yo8rxp
Member

Registered: Jul 2009
Location: Romania
Distribution: Ubuntu 10.04 Gnome 2
Posts: 102

Rep: 31
Please use software RAID. Why? Simply because in order to report faulty drives, with hardware RAID you have to check the BIOS reports (which implies a PC reset), while with software RAID you just interrogate /proc/mdstat. Software RAID lets you stop/remove/add drives online in the blink of an eye (more or less a resync), while some AMI or Phoenix BIOS RAID config pages are not so user friendly. The main downfall with software RAID is having to create separate boot partitions, or use separate cloned drives and/or a cloned first 512 bytes to preserve the MBR, just for the sake of redundant booting!

On my RAID 1 of 2 × 1TB, I use 2 × 8GB cloned Kingston mini flash drives just for boot / boot-partition redundancy.
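To illustrate the "just interrogate /proc/mdstat" point, here is a rough sketch of checking array health programmatically; the sample text and function name are illustrative, not from this thread:

```python
# Flag degraded md arrays by parsing /proc/mdstat.
# The status line looks like "... [2/2] [UU]"; an underscore marks a
# missing or failed member, e.g. "[4/3] [UU_U]".
import re

def degraded_arrays(mdstat_text: str) -> list:
    degraded, current = [], None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)          # start of a new array stanza
        elif current and "_" in "".join(re.findall(r"\[([U_]+)\]", line)):
            degraded.append(current)      # status line shows a missing member
            current = None
    return degraded

# On a real system you would read open("/proc/mdstat").read() instead.
sample = """\
md0 : active raid1 sdb1[1] sda1[0]
      8380416 blocks super 1.2 [2/2] [UU]
md1 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      5860270080 blocks level 5, 512k chunk [4/3] [UU_U]
"""
print(degraded_arrays(sample))  # only md1 is missing a member
```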

Last edited by yo8rxp; 07-25-2014 at 01:45 PM.
 
07-25-2014, 09:31 PM   #6
jefro
Moderator

Registered: Mar 2008
Posts: 21,985

Rep: 3626
If I were to run a RAID, I still think I'd run a true hardware RAID. They are not cheap. Fake RAID and software RAID may be great for mirrors, but if you really want speed, you have to use hardware. Now, as to whether you need it... well... I don't know.
Some may argue that ZFS or Btrfs will provide the features, but you'd have to set them up correctly. I was kind of shocked to see XFS come back into favor, so you could easily run that on a hardware RAID and be sure the sales pitch on speed was met.

We still use hardware RAID on almost every server, for many features besides just speed.

Last edited by jefro; 07-25-2014 at 09:33 PM.
 
07-25-2014, 10:45 PM   #7
lleb
Senior Member

Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Original Poster
Rep: 551
I'm a bit skeptical of software RAID, specifically because it runs off the CPU, and my CPU is already rather hammered by the F@H client.
 
07-26-2014, 12:10 AM   #8
yo8rxp
Member

Registered: Jul 2009
Location: Romania
Distribution: Ubuntu 10.04 Gnome 2
Posts: 102

Rep: 31
Quote:
Originally Posted by lleb View Post
I'm a bit skeptical of software RAID, specifically because it runs off the CPU, and my CPU is already rather hammered by the F@H client.
You say it would run off the CPU. I simply put 2 × 1TB drives rated at 140 MB/s in a RAID 0 and saw no CPU load at all while running dd tests, and it worked as it was supposed to: about 280 MB/s write speed on software RAID.
To be honest, how many hardware RAID setups have you successfully altered over time, adding/removing drives from one personality or another without compromising the entire personality? You name it: the Dell PERC 5 is a waste of time, the Dell PowerEdge 1950 III simply broke my brain, and the ASUS BIOS looks great but the risk of reinitializing the drives (and thus losing data) is huge. I got sick of it over time and stuck with mdadm Linux RAID: runs great, no CPU load, real-time RAID monitoring, real-time resync monitoring, and so forth.
All the above was done on a middle-spec machine: a stock AMD x4 640 at 3 GHz with just 4GB of 1333 RAM.
True, I did not test it on 2 × SSD drives, so it remains to be seen whether software RAID can handle speeds like 1 GB/s while striping 2 SSDs in RAID 0!
Sincerely ,
Gabriel linux-romania dot com
 
07-26-2014, 02:51 PM   #9
lleb
Senior Member

Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Original Poster
Rep: 551
I really wish there were some hard comparisons of software RAID on RHEL v7 vs. typical hardware solutions. I looked into ZFS, but sadly I only have 8GB of RAM, so I would have to either double it or max it out, and I don't know without more digging whether my motherboard can take 32GB.

ZFS sounds like a great option, but not at the moment, at least unless someone can convince me it's well worth the $$$ for the increased RAM.
 