Old 10-21-2015, 07:05 PM   #1
JoseCuervo
Member
 
Registered: May 2007
Location: North Carolina
Distribution: RHEL 7, CentOS7
Posts: 82

Rep: Reputation: 18
RaidZ2 or mdadm + Fedora22 + hot-swappable drives + ZFS vs XFS vs EXT4


Hey guys, we have here an... OPINION QUESTION!!

The title says it all. I know my options, but I don't have any convictions yet due to a lack of experience. I'm redoing my home storage from scratch and want to put my time into planning instead of fixing mistakes I could have avoided.

Goal: centralized NAS with maximum uptime and convenience of maintenance.
The NAS will provide a place for backups of apartment devices, media storage (1.2TB and growing), and an NFS datastore mount for a home ESXi hypervisor, since I haven't switched fully to an OpenStack hypervisor yet. I also want the ability to grow the size of the RAID volume by incrementally installing larger hard drives. I don't actually know if this is possible, and I've read WAY too many confusing, conflicting anecdotes.

Materials:
Hot-swappable drive bay with SATA backplane for 5 x 3.5" drives (link)
5 x 2TB hard drives (mixture of eBay wins)
Mid-tower computer (380W Antec EarthWatt PSU, G3258 Pentium CPU, 16GiB DDR3, 1 x 16GiB USB drive for OS)

My thoughts (AKA the part you argue with):
I can put 5 x 3.5" 2TB drives into the hot-swap bay and create a RAID6 volume with 6TB usable. I can put any filesystem I want on top of that and create LUNs or files for export all day long. Or I can install ZFS and create a RaidZ2 pool of 6TB, which will work basically the same way. Either way I want all five drives to be hot-swappable with double parity. That should prevent most predictable downtime from drive hardware failures.
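
For the mdadm route, this is roughly what I have in mind (the device names and mount point are just placeholders I'd double-check against lsblk before running anything):

# Build a 5-disk RAID6 array out of the hot-swap bay drives (placeholder device names)
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Put whatever filesystem I end up choosing on top, e.g. XFS
mkfs.xfs /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage

# Record the array so it assembles on boot
mdadm --detail --scan >> /etc/mdadm.conf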

The whole system is behind a solid battery backup. That's part of the reason why I don't think software RAID on a dedicated system is going to be inferior to a decent RAID card.

Fedora 22 will be installed and configured on the 16GiB USB drive, then dd'd into an image file, dd'd back onto identical flash drives, and lastly the image file will be uploaded for safekeeping. I'm not worried about protecting that USB install with hardware redundancy because of how easy it is to plug another one in.
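
Concretely, the imaging step is just dd in both directions (the /dev/sdX and /dev/sdY names are placeholders for whichever devices the sticks show up as):

# Image the configured OS stick into a file for safekeeping
dd if=/dev/sdX of=filer-usb.img bs=4M conv=fsync

# Write the image back onto a spare, identical stick
dd if=filer-usb.img of=/dev/sdY bs=4M conv=fsync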

I haven't bought fibre channel cards yet, but I will be exporting a bunch of LUNs and NFS shares off this host. I'll be using a Fedora 22 Server install since I'm fairly attached to it as an OS, which means I'll be exporting over ethernet for now and absorbing whatever overhead that might entail.

OPINION SECTION!!
What am I doing wrong? Is ZFS the clear choice here? RaidZ2 vs RAID6 under ZFS? RAID6 & a more stable but less featured filesystem? Hell, does anyone swear up and down that I should be using btrfs?

Do any of you stick to RAID10 as the be-all of resiliency? I know the double-parity stripe will sacrifice some performance, especially during writes, but it's going to save me 2TB over RAID10.

Secondly: how can I keep the drives spun down when they're not being accessed? Is there a way to have the drives park or does the RAID volume get in the way? I'd like to not have all drives going crazy 24/7.

Finally: fill in the blank. Now is a good time to mention $(new thing OP doesn't know about)

Thanks all!
 
Old 10-23-2015, 09:45 PM   #2
debguy
Member
 
Registered: Oct 2014
Location: U.S.A.
Distribution: mixed, mostly Debian/Slackware today
Posts: 207

Rep: Reputation: 19
Used hardware RAID cards are cheap and reliable. Don't knock yourself out on it.
 
Old 10-23-2015, 09:49 PM   #3
Emerson
LQ Sage
 
Registered: Nov 2004
Location: Saint Amant, Acadiana
Distribution: Gentoo ~amd64
Posts: 7,661

Rep: Reputation: Disabled
Quote:
Originally Posted by debguy
Used hardware RAID cards are cheap and reliable. Don't knock yourself out on it.
What ?!? You do not want to go hardware RAID unless you have 10 or more disks.
 
Old 10-23-2015, 09:53 PM   #4
debguy
Member
 
Registered: Oct 2014
Location: U.S.A.
Distribution: mixed, mostly Debian/Slackware today
Posts: 207

Rep: Reputation: 19
SATA is far more powerful than IDE, largely thanks to borrowing from SCSI designs.

IDE ("Integrated Drive Electronics") was meant to be cheap. The "integrated" part was a bit of a lie: the drives initially relied on add-on cards, and the IDE controller was built into the card rather than the drive, so the drives were cheaper but nothing was really integrated. The vendors (Microsoft, Lotus, Intel) never wanted to pay for SCSI patents, so they kept upgrading IDE with features that were "like SCSI but meant to avoid patent conflicts," which by around 2000 became the SCSI-like SATA.

SCSI is still a little ahead in speed and utility and is more scalable (and there is more hardware to choose from, plus things like SCSI controller monitors; are there SATA controller hardware monitors? likely not).

SATA is no doubt a fast and useful system.
 
Old 10-23-2015, 09:56 PM   #5
debguy
Member
 
Registered: Oct 2014
Location: U.S.A.
Distribution: mixed, mostly Debian/Slackware today
Posts: 207

Rep: Reputation: 19
Re: ZFS etc. ...

Search the internet, specifically the bugzilla for ZFS. See if there are many active bugs, especially any complaints of corruption. If there are: run the other way.
 
Old 10-23-2015, 09:59 PM   #6
debguy
Member
 
Registered: Oct 2014
Location: U.S.A.
Distribution: mixed, mostly Debian/Slackware today
Posts: 207

Rep: Reputation: 19
NFS is an old alternative to "FTP'ing" files.

It is not safe in any manner for mirroring.

It is ONLY SAFE when an experienced user uses it to share files knowing full well that at any time a file might be missing or corrupt, and that NFS never checks.

Period.
 
Old 10-23-2015, 10:01 PM   #7
debguy
Member
 
Registered: Oct 2014
Location: U.S.A.
Distribution: mixed, mostly Debian/Slackware today
Posts: 207

Rep: Reputation: 19
Using SCSI and RAID level 2, any drive could go bad and the system would stay safe and running.

RAID level 3 (software or otherwise) was actually CHEAP, because if one drive failed it could mean all the drives were ruined.

"thanks for the upgrade ... no thanks"

---------- Post added 10-23-15 at 11:02 PM ----------

> Secondly: how can I keep the drives spun down when they're not being accessed? Is there a way to have the drives park or does the RAID volume get in the way? I'd like to not have all drives going crazy 24/7.

That should be a drive setting; try looking into:

$ hdparm --help

(NOT while mdadm or the RAID is active, goodness - one wrong move and the software clobbers data)
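
For example, something along these lines (the timeout value and device name are just examples; check the man page, the -S math is odd):

# spin the drive down after ~20 minutes idle (with -S, values 1-240 are multiples of 5 seconds)
hdparm -S 240 /dev/sdb

# or force it into standby right now
hdparm -y /dev/sdb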

Last edited by debguy; 10-23-2015 at 10:03 PM.
 
Old 10-23-2015, 10:05 PM   #8
debguy
Member
 
Registered: Oct 2014
Location: U.S.A.
Distribution: mixed, mostly Debian/Slackware today
Posts: 207

Rep: Reputation: 19
New thing? Be a gov worker making $100k+ using tax money to work at a "data center" with your friends, all wearing IT keys around your necks.

Hell on taxpayers, but a lot of fun!
 
Old 11-13-2015, 02:47 PM   #9
JoseCuervo
Member
 
Registered: May 2007
Location: North Carolina
Distribution: RHEL 7, CentOS7
Posts: 82

Original Poster
Rep: Reputation: 18
Hey guys, thanks for the answers I got. I guess this didn't provoke the criticism I'm used to.

I didn't see a lot of actual discussion about what I should be doing, so I'm just going to post an update on the progress I've made so far and my choices. If someone finds them useful in the future, I'll be glad, and if someone wants to improve on my choices, let me know.

Software vs Hardware RAID
I'm avoiding hardware RAID at this point. I have few disks and wouldn't see much improvement. For the reasons I listed initially I'll stick with software RAID, and I don't expect to see a performance or robustness decrease; in fact, I expect a much more convenient experience. Without the risk of a hardware RAID card dying, I can much more easily move my hard drives into another machine, or re-install my filer OS (I went with Fedora 22), without having to worry about compatibility or device firmware.
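
As a rough example of what that portability means in practice with plain mdadm (not something I've had to do yet, so take it as a sketch): the array metadata lives on the disks themselves, so on a new machine it should just reassemble.

# Scan the attached disks and assemble any arrays described by their on-disk metadata
mdadm --assemble --scan

# Or name the members explicitly (placeholder device names)
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf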

ZFS
I settled on ZFS because of the tight integration between the filesystem, volume management, and devices. Using zpools to manage my devices is the easiest solution I've seen. Putting 5 x 2TB hard drives in a RaidZ2 configuration (ZFS's double-parity equivalent of RAID6) gives me good performance with good robustness. I'm copying the data onto a backup hard drive on the network, and also onto a 4TB drive that sits in a garage across town. Low effort, high safety. I eventually decided against btrfs because the potential benefits it offers won't affect my use case, and may not arrive on a predictable timeline anyway. The install I have now doesn't need to be updated and fills all my needs.
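
For anyone reading along later, the pool setup boils down to something like this (the pool and dataset names, the by-id paths, and the backup host are all placeholders, not gospel):

# Create the double-parity pool from the five 2TB drives (by-id paths so names survive reboots)
zpool create tank raidz2 \
  /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 /dev/disk/by-id/ata-disk3 \
  /dev/disk/by-id/ata-disk4 /dev/disk/by-id/ata-disk5

# A filesystem for media plus a zvol I can later export as a LUN
zfs create tank/media
zfs create -V 500G tank/esxi-lun01

# Snapshot and ship a copy to the backup box on the network
zfs snapshot tank/media@weekly
zfs send tank/media@weekly | ssh backupbox zfs receive -F backup/media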

Hardware & config
I settled on the drive bay I mentioned previously, but I was convinced by a friend to pick up some cheap QLogic fibre cards on eBay. They were $7 each, so it was hard to argue with the price. Now I can export my ZFS LUNs over fibre at 4Gb/s instead of over ethernet. I'm having trouble getting ESXi to recognize them, and I don't know if it's a card issue or an ESXi issue; I'm worried the hypervisor is being picky about where they sit on the motherboard. I'm planning on updating the cards' firmware and trying again, and I'll update this thread either way. And the fibre cards officially mean I'm building a SAN and not a NAS. At least I'm learning some new terms.
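
The export side, as far as I understand it so far, goes through LIO/targetcli with the qla2xxx fabric module once the HBA is in target mode. A rough sketch of the plan (the zvol path and both WWPNs are placeholders, and I haven't verified every step on this hardware yet):

# Inside the targetcli shell (targetcli package on Fedora):
# register the ZFS zvol as a block backstore
/backstores/block create name=esxi-lun01 dev=/dev/zvol/tank/esxi-lun01

# create an FC target on the local HBA port (WWPN must match the actual port)
/qla2xxx create naa.2100001b32aaaaaa

# map the backstore as a LUN and allow the ESXi initiator's WWPN
/qla2xxx/naa.2100001b32aaaaaa/luns create /backstores/block/esxi-lun01
/qla2xxx/naa.2100001b32aaaaaa/acls create naa.2100001b32bbbbbb

# persist the configuration
saveconfig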

Final goals: I'll do some benchmarks later for performance and stability.
 
Old 11-14-2015, 05:21 PM   #10
JoseCuervo
Member
 
Registered: May 2007
Location: North Carolina
Distribution: RHEL 7, CentOS7
Posts: 82

Original Poster
Rep: Reputation: 18
Update 1:

I created a blank DOS boot disk on a 16GiB USB 3.0 drive (using the 'Rufus' tool on Windows) and then unpacked the firmware for my cards (from the QLogic website, for the ISP2432) onto the drive. I booted the drive and ran update.bat, which auto-detected the cards and upgraded the firmware. Then I accessed the cards themselves and disabled the BIOS on each port (the internet told me that if I wasn't booting over SAN I could save some device memory by disabling it, and it hasn't broken anything yet).

2 out of 3 cards worked fine; the third has not been detected by any computer or port, so it's going back to eBay. Once the firmware was upgraded, the working cards were properly detected on ESXi 6 and Fedora 22.

More updates to come as I work on this project.
 
Old 11-16-2015, 06:31 PM   #11
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
I'm curious as to why you'd pick Fedora, the leading-edge/bleeding-edge, high-turnover R&D distro for RHEL/CentOS.
Why wouldn't you go for the stability option of RHEL/CentOS, for what it seems is going to be a key server when it's done?
 
Old 11-17-2015, 07:22 PM   #12
JoseCuervo
Member
 
Registered: May 2007
Location: North Carolina
Distribution: RHEL 7, CentOS7
Posts: 82

Original Poster
Rep: Reputation: 18
Good question, Chris, and it's only because I have a friend who used a similar setup (QLogic HBAs and fibre to push datastore LUNs over to an ESXi host). Obviously I'm adding space for media, but the tech parts are the same. She has tested the setup extensively and decided, conclusively, that no version of Debian or CentOS supports this yet. Her exact quote in the notes I'm following says:
Failed OS’s
Debian 7
Debian 8.1
CentOS 6.x
CentOS 7.x
RHEL 7.x

Later found that RPM-based distros do not currently support the feature of exporting these types of volumes via FC, though CentOS 7 would support FCoE exports. The limitation is due to the distros aiming for stability and thus sacrificing this newer system. Debian versions failed to load the “QLINI” option to the kernel, which resulted in the export of volumes via HBAs failing.
She works on a different IT team at Red Hat, so I'm not going to second-guess her results. I'm pulling ahead of her in pure ZFS experience, but I'm brand new to fibre and volume exporting. In fact, I'm pretty new to the whole idea of using storage this aggressively. She says Fedora 22 is currently the best option for exporting FC volumes like this, and I'll take that at face value.
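
For reference, the knob she's calling "QLINI" appears to be the qla2xxx driver's initiator-mode option. On Fedora my plan is roughly the following, though the exact value comes from her notes plus my own reading, so treat it as an assumption until I've tested it:

# /etc/modprobe.d/qla2xxx.conf -- turn off initiator mode so the HBA ports can act as targets
options qla2xxx qlini_mode=disabled

# rebuild the initramfs so the option takes effect at boot, then reboot
dracut -f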

That said, all my production machines are running light CentOS 7 or Debian Jessie installs right now for stability. I know it sounds like a serious machine/project, but it's not. The data is backed up in triplicate, all my VMs are still running fine, and the wife is using Netflix while the media is offline. I just want to try something and do it right. If it fails, I scrap it and go back to a simple JBOD Samba share. If it works, I turn it on and forget about it for a decade.

Cheers,

Jose
 
Old 11-18-2015, 09:21 PM   #13
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
OK, that's a good reply.
FWIW F23 is out; maybe the newer version will work better.
Good luck.
 
  

