Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
I just purchased the last 3 drives required to upgrade my server to a
6 x 3TB RAID for media storage and video playback. I plan on upgrading
to CentOS 7 at the same time. The server has the following tasks:
1. NFS server, both for backups of data on laptops and for media
playback via XBMC, Plex, or WDTV.
2. F@H runs 24/7.
3. Local Minecraft server for my son.
That's all it currently does; I might add other services to it later.
With that stated, video playback is its #1 priority. Everything else
can go by the wayside, but not the playback performance/quality of
streaming on the LAN. I'm not as concerned with remote (WWW) playback
via Plex at this point in time.
I've been mucking around with different RAID levels and was thinking of
going RAID 10, but some quick calculations tell me that I would only have
a little less than 9TB of storage if I go RAID 10. On the other hand, if
I go RAID 5, I get 15TB of storage and still have protection against data
loss if I lose a drive (two drives with 6 disks?).
Is that correct? Also, is there enough of a performance boost in RAID 10
over RAID 5 to justify losing 50% of the total drive capacity to
redundancy?
Keep in mind that just about everything on the server is replaceable.
This is not for a business, just personal use, but I would like a
layer of protection, which is why I'm not looking at RAID 0: high
performance, but zero data protection, and if you lose one drive, you
lose 100% of the data. That's just not worth it to me.
I am currently sitting on no less than 4.6TB of media, so while
9TB would basically double my current used space, 15TB would last
longer before I had to expand the storage again.
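The capacity arithmetic above is easy to sanity-check. A quick sketch in shell; the drive count and size come straight from the post, and RAID 6 is included for comparison:

```shell
#!/bin/sh
# Usable capacity of 6 x 3 TB drives at different RAID levels.
DISKS=6
SIZE_TB=3

RAID10_TB=$(( DISKS * SIZE_TB / 2 ))    # mirrored stripes: half the raw space
RAID5_TB=$(( (DISKS - 1) * SIZE_TB ))   # one disk's worth of parity
RAID6_TB=$(( (DISKS - 2) * SIZE_TB ))   # two disks' worth of parity

echo "RAID 10: ${RAID10_TB} TB usable"  # 9 TB
echo "RAID 5:  ${RAID5_TB} TB usable"   # 15 TB
echo "RAID 6:  ${RAID6_TB} TB usable"   # 12 TB
```

So the 9TB and 15TB figures are right; note, though, that RAID 5 survives only a single drive failure no matter how many disks are in the array. Two-drive tolerance needs RAID 6.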
RAID 10 makes more sense with 4 disks; you lose half the capacity, and the reason to use it is speed.
With RAID 5 you lose the capacity of one disk. Reading from RAID 5 can be quite fast, but writing is slower. With more disks it gets faster.
For a media server, disk speed is usually not that important. It doesn't matter at all when playing movies, music and such. It matters when copying to and from the server, but there is usually a much bigger difference between gigabit Ethernet and Wi-Fi.
With RAID 6 you lose the capacity of 2 disks, but 2 disks can fail and the array keeps working.
And be sure to set up some service that sends e-mail when a disk fails. If you use mdadm, it will send mail to root@localhost; if you don't read that mailbox, forward it somewhere else.
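For reference, a minimal sketch of the alert setup described above, assuming mdadm; the address is a placeholder and config paths vary by distro:

```shell
# /etc/mdadm.conf (Debian-family systems use /etc/mdadm/mdadm.conf):
# MAILADDR redirects failure alerts that would otherwise go to root@localhost.
#   MAILADDR you@example.com    <- placeholder address, change it

# Send one test alert per array and exit, to confirm mail actually arrives:
mdadm --monitor --scan --oneshot --test
```

On most distros the mdmonitor service runs `mdadm --monitor` for you at boot once MAILADDR is set.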
Thanks, both of you. I was thinking between 5/10 due to my motherboard's onboard RAID support, but after some further research I'll be investing in a RAID card, or possibly creating a software RAID via Cent...
Please use software RAID. Why? Because with hardware RAID, to find out about faulty drives you have to check the BIOS reports (which implies resetting the PC), while with software RAID you just interrogate /proc/mdstat. Software RAID lets you stop/remove/add drives online in the blink of an eye (give or take a resync), while some AMI or Phoenix BIOS RAID config pages are not so user-friendly. The main downfall of software RAID is needing separate boot partitions, or separate cloned drives and/or a cloned first 512 bytes to preserve the MBR, purely for boot redundancy.
On my RAID 1 of 2 x 1TB I use 2 x 8GB cloned mini Kingston flash drives just for boot / boot-partition redundancy.
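The online remove/add workflow mentioned above looks roughly like this with mdadm; the array and partition names here are hypothetical examples, and the commands need root:

```shell
# Array health, without rebooting into any BIOS screen:
cat /proc/mdstat

# Replace a member online (/dev/md0 and /dev/sdb1 are just placeholders):
mdadm /dev/md0 --fail /dev/sdb1      # mark the suspect drive as failed
mdadm /dev/md0 --remove /dev/sdb1    # pull it out of the array
mdadm /dev/md0 --add /dev/sdc1      # add the replacement; resync starts automatically
```

The resync progress then shows up live in /proc/mdstat.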
If I were to run a RAID, I still think I'd run a true hardware RAID. They are not cheap. The fake-RAID and software options may be great for mirrors, but if you really want speed, you have to use hardware. Now, as to whether you need it... well... I don't know.
Some may argue that ZFS or Btrfs will provide the same features, but you'd have to set them up correctly. I was kind of shocked to see XFS come back into favor, so you could easily run that on a hardware RAID and be sure the sales pitch on speed was met.
We still use hardware RAID on almost every server, for many features besides just speed.
I'm a bit skeptical of software RAID specifically because it runs off the CPU, and my CPU is already well hammered by the F@H client.
You say it would run off the CPU, but I simply put 2 x 1TB drives rated at 140 MB/s into a RAID 0 and saw no CPU load at all while running dd tests, and it worked as it was supposed to: about 280 MB/s write speed on software RAID.
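A dd run like the one described is easy to reproduce. A sketch that writes to /tmp; point OUT at a file on the array to measure the array itself (the path here is just a placeholder):

```shell
#!/bin/sh
# Rough sequential-write check; conv=fdatasync forces the data to disk
# before dd reports its rate, so the number isn't just page-cache speed.
OUT=/tmp/ddtest.$$
dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fdatasync
BYTES=$(wc -c < "$OUT")
rm -f "$OUT"
echo "wrote $BYTES bytes"
```

Running `top` in another terminal during the test shows how much CPU the md threads actually consume.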
To be honest, how many hardware RAIDs have you successfully altered over time to add/remove drives from one personality or another without compromising the entire personality itself? You name it: the Dell PERC 5 is a waste of time, the Dell PowerEdge 1950 III simply broke my brain's synapses, the ASUS BIOS looks great but the risk of reinitializing the drives, and thus losing data, is huge. I got sick of it over time and stuck with mdadm Linux RAID: it runs great, with no CPU load, real-time RAID monitoring, real-time resync monitoring, and so forth.
All of the above was done on a middling machine, a stock AMD x4 640 at 3 GHz with just 4GB of 1333 RAM.
True, I did not test it on 2 x SSD drives, so we'll see whether software RAID can handle speeds like 1 GB/s while striping 2 SSDs in RAID 0!
Sincerely,
Gabriel linux-romania dot com
I really wish there were some hard comparisons of software RAID on RHEL 7 vs typical hardware solutions. I looked into ZFS, but sadly I only have 8GB of RAM, so I would have to either double the RAM or max it out, and I don't know if my motherboard can take 32GB without more digging.
ZFS sounds like a great option, but not at the moment, at least unless someone can convince me it's well worth the $$$ for the extra RAM.
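The motherboard's RAM ceiling can usually be answered without opening the case. A sketch using dmidecode, which reads the board's DMI tables (needs root):

```shell
# DMI type 16 is the Physical Memory Array; it reports the board's maximum:
dmidecode -t 16 | grep -i 'maximum capacity'

# DMI type 17 lists each slot; empty slots report "No Module Installed":
dmidecode -t 17 | grep -i 'size:'
```

If the maximum capacity line says 32 GB and there are free slots, the upgrade is at least mechanically possible; the board's QVL still decides which modules work.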