LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Server
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 11-01-2017, 03:46 PM   #16
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 9 Stretch
Posts: 2,354
Blog Entries: 8

Rep: Reputation: 384

I personally would not be comfortable with data that didn't have at least TWO independent backups. In other words, the data is at least triplicated (the live version plus two independent full backups). That said, I do have a lot of stuff that is only duplicated and that I could live without.

So - your limitation. Is it because you're maxed out on SATA interfaces? If so, then I'd look at the possibility of a clustered file system such as GlusterFS. The basic idea would be to do something like RAID1+0 except each RAID1 pair is replicated across two different computers. One can be a "fast" primary server while the other one is a "slow" fallback secondary, so the second computer doesn't need to be very expensive.

Such a clustered file system gives you the ability to maintain uptime even if one of the motherboards or power supplies fails. Performance would be as good as RAID0, assuming you've got good network bandwidth between the two file servers.

https://www.tecmint.com/introduction...os-and-fedora/
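A rough sketch of what the replica setup might look like, going by that article (hostnames and brick paths here are made-up examples, untested by me):

```shell
# From server1, add the second server to the trusted pool
gluster peer probe server2

# Create a volume with 2 replicas: every file is written to both
# bricks, one per server -- the "RAID1 across two computers" part
gluster volume create gv0 replica 2 \
    server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0

# Clients then mount it like any other network filesystem
mount -t glusterfs server1:/gv0 /mnt/gv0
```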

I don't have personal experience with this, though. For my own purposes I actually prefer RAID0 plus rsync backups over RAID mirroring. My data does not change much from day to day, and I don't require 24/7 uptime. A true rsync backup saves me from an "oops" mistake, which RAID1 would instantly propagate.
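In case it's useful, the backup side of that is essentially a one-liner (paths here are just examples):

```shell
# Mirror the live share onto an independent backup disk.
# -a preserves permissions/ownership/timestamps; --delete makes the
# copy an exact mirror, so an accidental deletion on the live side
# only reaches the backup the next time this runs -- that window is
# exactly what saves you from an "oops", unlike RAID1's instant
# propagation.
rsync -a --delete /srv/share/ /mnt/backup/share/
```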

A drive failure in a RAID0 brings the file share down, of course, but it only takes me minutes to swap in a backup file server (swapping IP addresses is very fast; most of the time goes into remounting the NFS shares to clean up stale handles).
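For reference, the stale-handle cleanup on each client is basically this (server IP and paths are made-up examples):

```shell
# After the backup server has taken over the old IP, force/lazy
# unmount the stale share, then mount it again from the new server
umount -f -l /mnt/share
mount -t nfs 192.168.1.10:/srv/share /mnt/share
```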
 
Old 11-01-2017, 05:05 PM   #17
gda
Member
 
Registered: Oct 2015
Posts: 119

Original Poster
Rep: Reputation: 25
My limitation comes mainly from the free raw space I have available. A solution like the one you are proposing would require double the raw space I actually need: for 40TB of usable space I would need 80TB of raw space, which I don't have... RAID6 offers better space efficiency but, of course, lower performance...

Moreover, in my current configuration I have a SAN which maps raw volumes (FC) to the application servers. With this setup the RAID (of whatever type) is supposed to be done at the SAN level rather than at the server level. Of course I could set up two identical RAID0 volumes and map them to two different servers, but in that case I think I would somewhat underuse the SAN's capabilities...

Your approach makes a lot of sense to me when the RAID is managed at the server level (with the hard drives installed in the server as well). Is your setup like that? Or did I miss something?

Finally, about the two independent backups: that would be really great, but unfortunately it is beyond my means at the moment... so I have to stick with only one full backup...
 
Old 11-01-2017, 05:45 PM   #18
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian 9 Stretch
Posts: 2,354
Blog Entries: 8

Rep: Reputation: 384
I was just guessing at where your limitation came from. It seemed odd to me that you were testing with a SAN but turning off the caches in order to make a more accurate test. I was guessing that maybe you were trying to simulate some sort of non-SAN storage system for the "true" eventual 40TB store.

My current setup actually has only one RAID left (a RAID0). I've migrated everything else to ordinary ext4 partitions on individual drives rather than consolidating multiple drives into RAID arrays (or LVM, or other methods of combining them). This has involved a certain amount of manually juggling things around, but the big benefits for my purposes are:

1) A drive failure only results in a small loss. All data is at least duplicated, so a single drive failure at a time does not involve any data loss.

2) If a drive fails, it's relatively small, so it doesn't take long to copy the backup to whatever free space is available, and it doesn't take too long to copy that data onto a replacement drive when it comes in.

3) All of my most frequently accessed files are on a single spinning drive. Every other spinning drive can be spun down most of the time, reducing power/heat/noise. Most of the time, there are only two drives spinning - my main file server's main storage drive, and one laptop which has its OS on a spinning hard drive. (All others have RAMBOOT, or nfs root, or SSD, or USB thumbdrive for OS.)

These benefits work because of the particular usage I have. Like I said, it's mostly unchanging data.
 
Old 11-01-2017, 06:07 PM   #19
gda
Member
 
Registered: Oct 2015
Posts: 119

Original Poster
Rep: Reputation: 25
Sorry, my fault. I was not clear enough on that point. I tested with the cache turned off just because I wanted to be sure I was testing XFS and EXT4 under exactly the same conditions. With the cache turned on, it can happen, for example, that the cache fills up at some unpredictable point while the tests are still ongoing...
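For the record, the same idea on plain Linux storage would look something like this (a generic sketch with example paths, not the exact SAN procedure):

```shell
# Between runs: flush dirty pages, then drop the page cache so each
# filesystem starts cold (needs root)
sync
echo 3 > /proc/sys/vm/drop_caches

# Or bypass the page cache entirely during the test with direct I/O
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024 oflag=direct
```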

Anyway, thanks for explaining your setup in more detail. It is much clearer to me now... really interesting...
 
  


