Old 03-31-2013, 10:04 PM   #1
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Rep: Reputation: 33
Question: Need suggestions on a filesystem to replace ext4


Following the recent thread about filesystems of choice, I am looking to replace my current ext4. Here is what I need:
1. Needs to be stable and reliable.
2. Have appropriate admin tools to correct/recover the filesystem.
3. Needs to be able to expand the filesystem quickly online.
4. Needs to support filesystem shrinking. This can be offline but should take less than an hour for very large filesystems.
5. Works well with dm-raid and LUKS, or has a suitable replacement offering two-disk parity.
6. Ability to convert from ext4 (this is optional).

I really like the ext[2-4] filesystems, and they are the only filesystems I have used on Linux. However, I have recently hit a limit with ext4 while trying to shrink a 14TB filesystem. The shrink ran for 7 days and never got past what looked like 10% (or 5 dots in the list of dots). I need to shrink the filesystem so I can reduce the number of disks in the RAID array.
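For reference, what I ran was the standard offline shrink, something like this (mount point and device name illustrative):

Code:
umount /mnt/raid
e2fsck -f /dev/md0          # required before an offline shrink
resize2fs -p /dev/md0 12T   # -p prints the progress dots mentioned above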

What I'm trying to accomplish is to reduce the number of disks in the RAID so I can start a new RAID with larger disks. I have a limited number of SATA ports and I cannot replace all the disks in the current RAID at the moment. This will be an ongoing requirement as I exhaust the capacity of each RAID array.

I have been looking at btrfs, but it seems to have a long way to go before being considered stable.
 
Old 04-01-2013, 06:36 AM   #2
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 755

Rep: Reputation: 226
Sounds like what you really need is LVM in between the RAID volume and the filesystem.

If you had that, then your current situation would be:
1. Replace each HDD in turn allowing the RAID array to rebuild on each drive
2. Once all the drives have been replaced add another RAID volume in the free space that is now available
3. Extend the LVM system over both the old and new RAID arrays
4. Expand the file system over the extra space

Pure software RAID in Linux is also capable of growing to increase the RAID size by either switching out the disks for larger ones or adding extra disks, without the need for LVM.
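To make those steps concrete, here is a rough sketch with mdadm and LVM (hypothetical device names; assumes ext4 sitting on an LV):

Code:
# 1. Replace each disk in turn; wait for each resync to finish
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdg1
# 2. Build a second array in the space freed up by the larger disks
mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[b-e]2
# 3. Extend the volume group over the new array
pvcreate /dev/md1
vgextend vg0 /dev/md1
lvextend -l +100%FREE /dev/vg0/data
# 4. Grow the filesystem (online for ext4)
resize2fs /dev/vg0/data

For the pure-md route, once every member has been swapped for a larger disk: mdadm --grow /dev/md0 --size=max, then resize2fs /dev/md0.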
 
Old 04-01-2013, 07:23 AM   #3
jtsn
Member
 
Registered: Sep 2011
Location: Europe
Distribution: Slackware
Posts: 803

Rep: Reputation: 354
Quote:
Originally Posted by tux_dude
I really like the ext[2-4] filesystems, and they are the only filesystems I have used on Linux. However, I have recently hit a limit with ext4 while trying to shrink a 14TB filesystem. The shrink ran for 7 days and never got past what looked like 10% (or 5 dots in the list of dots). I need to shrink the filesystem so I can reduce the number of disks in the RAID array.
This is what you can expect from a parity-based RAID: you sacrifice write performance for additional space with reliability. Shrinking a filesystem is write-intensive, so you'll have to wait.

Quote:
I have been looking at btrfs, but it seems to have a long way to go before being considered stable.
Your problem will be the same with any filesystem. Your storage solution is too slow for its size. How long does a rebuild take after replacing a disk?
 
Old 04-01-2013, 01:06 PM   #4
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Original Poster
Rep: Reputation: 33
Quote:
Originally Posted by wildwizard
Sounds like what you really need is LVM in between the RAID volume and the filesystem.

If you had that, then your current situation would be:
1. Replace each HDD in turn allowing the RAID array to rebuild on each drive
2. Once all the drives have been replaced add another RAID volume in the free space that is now available
3. Extend the LVM system over both the old and new RAID arrays
4. Expand the file system over the extra space

Pure software RAID in Linux is also capable of growing to increase the RAID size by either switching out the disks for larger ones or adding extra disks, without the need for LVM.
I did look at and consider LVM before I started my build. Since I wanted one large filesystem, LVM did not add any value to my setup. My issue is not with expanding the array or the filesystem. In fact, not having LVM has reduced the expansion process to 3 steps (I don't need to do step 3).

My issue is with shrinking the array. E.g., I had a disk failure a few days ago and want to purchase a larger hard drive (currently using 1.5TB drives in this array) and utilize the entire space of the new drive. I am unable to replace all the drives in the array. My solution is to shrink the array from the current 12-disk RAID6 to an 11-disk RAID6. Then I could add a 3TB hard drive and start a new array. I want to be able to rinse/repeat this process as my 1.5TB drives fail.
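For what it's worth, the sequence I'm attempting looks roughly like this (sizes illustrative for 12x1.5TB going to 11 disks; it's the resize2fs step that never finishes):

Code:
umount /mnt/raid
e2fsck -f /dev/md0
resize2fs /dev/md0 12T                      # comfortably below the new capacity
mdadm --grow /dev/md0 --array-size=12500G   # clamp the usable size first
mdadm --grow /dev/md0 --raid-devices=11 --backup-file=/root/md0-backup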
 
Old 04-01-2013, 01:25 PM   #5
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Original Poster
Rep: Reputation: 33
Quote:
Originally Posted by jtsn
This is what you can expect from a parity-based RAID: you sacrifice write performance for additional space with reliability. Shrinking a filesystem is write-intensive, so you'll have to wait.
Interesting. I never thought this could have such a huge performance impact. If my calculation is correct, copying all 14TB across the 1G LAN would not take 7 days. I understand that all the reads and writes are occurring on the same RAID, but this is ridiculously slow. Another oddity during the shrink was that the progress kept starting over, i.e. it would go to 1, then 2 dots, then start over from 1 again. That repeated for a few hours before it reached the third dot. It cycled from 1 to 3 repeatedly for over a day before reaching 4, eventually taking 7 days to get to the fifth dot.


Quote:
Originally Posted by jtsn
Your problem will be the same with any filesystem. Your storage solution is too slow for its size. How long does a rebuild take after replacing a disk?
Rebuilds are relatively fast. A failed-disk replacement takes about 4 hours if the array is not in use (about 12 hours when in use). A disk expansion takes about 24 hours in use (I have never done a disk expansion offline).
 
Old 04-02-2013, 04:22 AM   #6
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 755

Rep: Reputation: 226
Quote:
Originally Posted by tux_dude
My solution is to shrink the array from the current 12-disk RAID6 to an 11-disk RAID6. Then I could add a 3TB hard drive and start a new array. I want to be able to rinse/repeat this process as my 1.5TB drives fail.
Not sure how that could work.

You will have a single drive RAID? How is that going to work?
 
Old 04-02-2013, 08:27 AM   #7
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Original Poster
Rep: Reputation: 33
Quote:
Originally Posted by wildwizard
Not sure how that could work.

You will have a single drive RAID? How is that going to work?
**Edit**: I reread this question. Yes, conceptually, I will have a single-disk array running as a degraded RAID1. Then two disks and rebuild my RAID1. Then three disks and migrate to RAID5. Then 4, then 5, then 6, then 8 and migrate to RAID6, etc.
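i.e. something along these lines (illustrative devices):

Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh1 missing
mdadm /dev/md1 --add /dev/sdi1          # second disk completes the mirror
mdadm --grow /dev/md1 --level=5         # 2-disk RAID1 -> 2-disk RAID5
mdadm /dev/md1 --add /dev/sdj1
mdadm --grow /dev/md1 --raid-devices=3  # reshape onto the third disk
# ...and later, RAID5 -> RAID6 once another disk is in:
mdadm /dev/md1 --add /dev/sdk1
mdadm --grow /dev/md1 --level=6 --raid-devices=4 --backup-file=/root/md1-backup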

However, I won't have a single RAID volume when I upgrade to larger disks. There will be two: the old RAID6 with 1.5TB drives, which will be replaced by a new RAID6 with 3TB drives as the 1.5TB drives fail (note this could take months or even years). While I could use your suggestion to span the filesystem across both RAIDs, I will still have the shrinking problem when the older disks start to fail.

Or I may have misunderstood what you wrote. So let's throw LVM into the mix. Say I had one large LV spanned across two RAID6 volumes; let's call them RAID6a (with 1.5TB disks) and RAID6b (with 3TB disks). If there is a drive failure in RAID6a, will I be able to reduce the size of RAID6a to n-1 disks, increase the size of RAID6b to m+1 disks, and then grow the LV to x+1.5TB without affecting the data? I want to be able to repeat this over time as the disks in RAID6a fail.
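I imagine the sequence would be something like the following, though I have not tested it (hypothetical devices, sizes, and PE ranges; assumes the tail of RAID6a's PV can be evacuated to free extents elsewhere):

Code:
mdadm /dev/md1 --add /dev/sde1                  # new 3TB disk into RAID6b
mdadm --grow /dev/md1 --raid-devices=5 --backup-file=/root/md1-backup
pvresize /dev/md1                               # expose the new space to LVM
pvmove /dev/md0:2400000-2599999                 # evacuate the tail of RAID6a's PV
pvresize --setphysicalvolumesize 12T /dev/md0   # shrink the PV below the new array size
mdadm --grow /dev/md0 --array-size=12300G       # then shrink RAID6a itself
mdadm --grow /dev/md0 --raid-devices=10 --backup-file=/root/md0-backup
lvextend -l +100%FREE /dev/vg0/data             # net gain should be ~1.5TB
resize2fs /dev/vg0/data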

Last edited by tux_dude; 04-02-2013 at 08:42 AM.
 
Old 04-02-2013, 11:48 AM   #8
Mark Pettit
Member
 
Registered: Dec 2008
Location: Cape Town, South Africa
Distribution: Slackware 14.1 64 Multi-Lib
Posts: 429

Rep: Reputation: 124
Would you be able to hire a server with space, or some sort of NAS device, for a few days? Then copy the whole lot off, rebuild the array, and finally copy it all back again. It will cost you some money, but surely not a fortune! And how important is your data to you?
 
Old 04-02-2013, 01:36 PM   #9
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Original Poster
Rep: Reputation: 33
Mark,
Hiring another server is not an ideal option for me. I am trying to keep both cost and downtime to a minimum (actually, not factoring in the cost of my time, I want to keep the cost to that of the new hard disks). My data is important, but not invaluable.

I am able to get to the n-1 state today with some nifty disk juggling, but it takes considerable effort and time. The server has to remain offline during the entire operation, which results in a very grumpy household. If I can solve the filesystem-shrinking issue, then downtime would be reduced to the time it takes to shrink the filesystem (assuming the shrink is done offline). If I could do the filesystem shrink overnight, that would be perfect.
 
Old 04-02-2013, 08:15 PM   #10
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.1
Posts: 1,494

Rep: Reputation: 437
Quote:
Originally Posted by tux_dude
**Edit**: I reread this question. Yes, conceptually, I will have a single-disk array running as a degraded RAID1. Then two disks and rebuild my RAID1. Then three disks and migrate to RAID5. Then 4, then 5, then 6, then 8 and migrate to RAID6, etc.

However, I won't have a single RAID volume when I upgrade to larger disks. There will be two: the old RAID6 with 1.5TB drives, which will be replaced by a new RAID6 with 3TB drives as the 1.5TB drives fail (note this could take months or even years). While I could use your suggestion to span the filesystem across both RAIDs, I will still have the shrinking problem when the older disks start to fail.

Or I may have misunderstood what you wrote. So let's throw LVM into the mix. Say I had one large LV spanned across two RAID6 volumes; let's call them RAID6a (with 1.5TB disks) and RAID6b (with 3TB disks). If there is a drive failure in RAID6a, will I be able to reduce the size of RAID6a to n-1 disks, increase the size of RAID6b to m+1 disks, and then grow the LV to x+1.5TB without affecting the data? I want to be able to repeat this over time as the disks in RAID6a fail.
Well, what's your goal with your RAID setup?

If you don't want to lose data and don't mind a write penalty, then creating a series of independent RAID-1 arrays that become physical volumes in an LVM environment might do what you really want. (Albeit at a 50% reduction of your maximum storage capacity.)
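A minimal sketch of that layout (illustrative devices):

Code:
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
pvcreate /dev/md10 /dev/md11
vgcreate vg0 /dev/md10 /dev/md11
lvcreate -l 100%FREE -n data vg0
# to retire a pair later, add its replacement first, then evacuate:
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1
pvcreate /dev/md12 && vgextend vg0 /dev/md12
pvmove /dev/md10
vgreduce vg0 /dev/md10

The point is that retiring or upgrading a mirrored pair never requires shrinking the filesystem; pvmove and vgreduce do the shuffling underneath it.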
 
Old 04-02-2013, 11:38 PM   #11
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Original Poster
Rep: Reputation: 33
Quote:
Originally Posted by Richard Cranium
Well, what's your goal with your RAID setup?

If you don't want to lose data and don't mind a write penalty, then creating a series of independent RAID-1 arrays that become physical volumes in an LVM environment might do what you really want. (Albeit at a 50% reduction of your maximum storage capacity.)
Another great idea. However, the reduction in usable storage is too great.

It seems I'll have to wait until btrfs stabilizes. Now that RAID6 support has been added, it looks like a better option. Hopefully it will be stable enough by the time 5TB drives are available at a reasonable price.
 
Old 04-03-2013, 01:52 AM   #12
jtsn
Member
 
Registered: Sep 2011
Location: Europe
Distribution: Slackware
Posts: 803

Rep: Reputation: 354
You could also try ZFS; it matured years ago.
 
Old 04-03-2013, 02:45 AM   #13
Mark Pettit
Member
 
Registered: Dec 2008
Location: Cape Town, South Africa
Distribution: Slackware 14.1 64 Multi-Lib
Posts: 429

Rep: Reputation: 124
ZFS :-) According to recent articles, it has only just matured on Linux (version 0.6.1, I think!), but it is a distinct possibility nevertheless, especially if you have disks of different sizes.
 
Old 04-03-2013, 04:39 AM   #14
jtsn
Member
 
Registered: Sep 2011
Location: Europe
Distribution: Slackware
Posts: 803

Rep: Reputation: 354
Well, you are not restricted to Linux when considering ZFS. There are other OS options with more mature support for this technology.
 
Old 04-03-2013, 08:32 AM   #15
tux_dude
Member
 
Registered: Dec 2008
Distribution: Slackware64 Current
Posts: 226

Original Poster
Rep: Reputation: 33
From my brief review of ZFS, my understanding is that its strength is its data integrity, but it does not offer much flexibility in device management. Not being able to shrink the ZFS pool or remove devices from it is a huge disadvantage for me. If you can direct me on how to accomplish the challenge in post #7 with ZFS (replace LVM with ZFS and RAID6 with RAIDZ2), I will rip Linux off the NAS tonight.
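As far as I can tell, the only growth path is replacing every member of a vdev, along these lines (illustrative device names):

Code:
zpool create tank raidz2 sdb sdc sdd sde sdf
zpool set autoexpand=on tank    # vdev grows once all members are bigger
zpool replace tank sdb sdg      # swap a 1.5TB disk for a 3TB one, resilver
# ...repeat for every disk; only then does the extra space appear.
# There is no way to drop a disk from a raidz vdev or remove the vdev
# from the pool, which is the n-1 shuffle I described in post #7.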

Last edited by tux_dude; 04-03-2013 at 09:17 AM.
 
  

