16x2TB RAID. What Large File System Format to Use?
What program is generating that message? I don't see it in your previous posts.
I'm not sure how LVM works, so I don't know if it would be able to work around this problem. From what I understand, LVM lets you spread a single file system across multiple disks/partitions. If that's the case, you'd still hit this problem when attempting to create the filesystem on it.
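For what it's worth, a minimal sketch of how that spreading works, assuming two hypothetical disks /dev/sdb and /dev/sdc (placeholder names, not from this thread):

    # Mark each disk as an LVM physical volume
    pvcreate /dev/sdb /dev/sdc
    # Pool them into a single volume group
    vgcreate bigvg /dev/sdb /dev/sdc
    # Carve one logical volume spanning both disks
    lvcreate -l 100%FREE -n bigvol bigvg
    # A filesystem still has to be created on /dev/bigvg/bigvol,
    # so any 16TB filesystem limit would still apply there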
From what I've been reading, you can't increase the block size beyond 4k, and e2fsprogs is limited to 32-bit block addressing with ext4, which makes the limit a hard 16TB; I don't see any way around it. From what I can tell, the only two mature Linux filesystems that support >16TB are XFS and JFS. XFS only works on 64-bit, and JFS isn't supported on CentOS (at least not well).
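That hard limit is just the addressing math: with 32-bit block numbers and the 4k maximum block size,

    # 2^32 blocks x 4096 bytes/block = 2^44 bytes = 16 TiB
    echo $(( 2**32 * 4096 / 2**40 ))   # prints 16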
I'm not sure what the best move here is. Would running two separate partitions/filesystems be possible given your usage?
Unfortunately it looks like XFS only works on 64-bit for some reason on CentOS. You could try ext4 instead (you still need the GPT partition table, but you should be able to use mkfs.ext4 /dev/sdb1 to switch from XFS to ext4).
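If you go that route, a rough sketch of the steps (the /dev/sdb device name is just an example; double-check yours):

    # GPT label is needed for volumes over 2TB
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 0% 100%
    # Format the new partition as ext4
    # (note: the 16TB ext4 ceiling discussed in this thread still applies)
    mkfs.ext4 /dev/sdb1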
Otherwise you'd have to switch to 64-bit, which means reinstalling the OS from scratch. If you post the output of "cat /proc/cpuinfo" we'll be able to see whether the hardware supports it, but unless this is a brand-new machine that was just set up, it will probably be more worthwhile to just use a different filesystem type.
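Specifically, the thing to look for is the "lm" (long mode) flag. A quick check, assuming the usual /proc layout:

    # "lm" in the flags line means the CPU supports 64-bit (long mode)
    grep -qw lm /proc/cpuinfo && echo "64-bit capable" || echo "32-bit only"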
I just discovered that my CPUs can run 64-bit... probably should have confirmed that before installing the OS. As I said before, this is a new server, so I haven't configured much; certainly nothing I can't easily do again.
So it looks like I'm reinstalling the OS. XFS had better work after that...
If it's a new server, I would almost certainly just go back and re-install the 64-bit version. From what I can see, that should fix the XFS problem; the 32-bit kernel simply doesn't include an XFS module, which is why it can't mount the filesystem.
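For anyone who wants to verify this on their own kernel, a couple of quick checks (paths assume a stock CentOS layout):

    # Filesystems the running kernel already knows about:
    grep xfs /proc/filesystems
    # Whether an XFS kernel module exists at all:
    find /lib/modules/$(uname -r) -name 'xfs.ko*'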
So today I installed the CentOS 6.2 x86_64 version, and it was ridiculously simple to format the RAID. I made the changes to /etc/fstab and gave it a reboot to see if it still worked, and just like that I have a 26TB XFS filesystem at my fingertips.
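For anyone following along, the steps look roughly like this; the device and mount point below are placeholders, not the actual ones from this box:

    # Format the array as XFS
    mkfs.xfs /dev/sdb1
    # Example /etc/fstab entry to mount it at boot
    /dev/sdb1   /data   xfs   defaults   0 0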
Thanks for all your help, suicidaleggroll.
Now does anyone have any insight into the dual fiber connection? It would be best to figure that out now before the file system starts being used.
OpenSolaris, Hardware Controller & RAIDZ3 for RAID spans
Each RAID array has benefits and drawbacks depending on what filesystem you are using. The questions you need to ask are as follows: Is this system using software RAID or hardware RAID? With Linux, even for a small array such as RAID 0, 1, 10, or 5, I would still suggest a basic $50-$100 RAID card; hardware RAID controllers work much better with any type of Linux. The other question is the size of the volume span and how many disks you are going to be using. For 2-5 disks, ext3 and ext4 are okay solutions for a basic array, though I personally would never use them because of their journaling, and I would only ever use ext4 on RAID 0 with at most 2 disks. XFS is a good option for anything with 6+ disks. But you want to make sure you have a UPS for your array, and a line conditioner is a good idea as well for such a large setup. If you can afford such a large volume of fibre drives, then these should not be a problem.

For my own home use I run OpenSolaris with RAIDZ3 and the LSI LSI00417 controller in an 8-drive RAID 50 span, so I only lose 25% to parity. This card costs just under $600 but offers a dual-core processor, 1GB of DDR3-1866, a PCI-E 3.0 x8 interface (8 lanes are crucial for any array with more than 4 disks), and support for Linux, Solaris, FreeBSD, VMware, and others. I have the HUS724040ALS640 drives; Hitachi has the lowest 3-year failure rate, even lower than Western Digital, and Seagate has an awful failure rate compared to the others. It is a SAS drive, not fibre, rated at a 2.0M-hour MTBF with a 5-year warranty. I recently built this to replace an old system that had a whopping 3TB in RAID 50; it's nice to have 24TB now, and it should last a good 5-6 years.

FYI, a good rule of thumb for any server is to have 1GB of RAM for every 1TB of storage, including parity in the calculation. I have 24TB usable and 8TB in parity, so I have 32GB of RAM, as 4x8GB in quad channel rather than 2x16GB in dual channel. I hope this helps.
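For illustration, here is a sketch of one ZFS layout consistent with the 25% parity figure above: two 4-drive raidz1 vdevs striped together, the ZFS analogue of RAID 50. The drive names are hypothetical Solaris-style placeholders, not the poster's actual devices:

    # Two raidz1 vdevs of four drives each, striped together;
    # 2 of the 8 drives go to parity, i.e. 25%
    zpool create tank \
        raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
        raidz1 c0t4d0 c0t5d0 c0t6d0 c0t7d0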