LinuxQuestions.org
Old 09-23-2009, 10:52 AM   #1
muad'dib
LQ Newbie
 
Registered: Sep 2009
Posts: 14

Rep: Reputation: 0
[SOLVED] Trouble expanding 4+ TB GPT partition after expanding hardware RAID-5 volume


Hi all -

I have an Areca 1220 hardware RAID controller with five 1.5 TB drives connected (eventually it will have 8).

The controller supports online capacity expansion.

I first created the array with 4 drives, RAID-5, with a GPT disk label. Obviously it's not being used as a boot device.

So yesterday I added a new 1.5 TB drive and ran the procedure to expand the RAID.

So far so good. The array is now 6 TB and shows up as such from Linux.

The problem occurs when I try to grow the single ext3 partition to fill the space. GParted runs for hours (most of that time checking the file system), but it comes back with an error and can't grow the filesystem.

What am I forgetting to do? Surely this must be possible.

Thanks in advance for any suggestions.

This is the output from GParted:
*********************************

GParted 0.4.4

Libparted 1.8.8
Grow /dev/sdb1 from 4.09 TiB to 5.46 TiB 02:37:26 ( ERROR )

calibrate /dev/sdb1 00:00:00 ( SUCCESS )

path: /dev/sdb1
start: 34
end: 8789049044
size: 8789049011 (4.09 TiB)
check file system on /dev/sdb1 for errors and (if possible) fix them 01:18:39 ( SUCCESS )

e2fsck -f -y -v /dev/sdb1

Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

16252 inodes used (0.01%)
5227 non-contiguous inodes (32.2%)
# of inodes with ind/dind/tind blocks: 9836/7428/17
421344823 blocks used (38.35%)
0 bad blocks
36 large files

15184 regular files
1059 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
--------
16243 files
e2fsck 1.41.3 (12-Oct-2008)
grow partition from 4.09 TiB to 5.46 TiB 00:00:00 ( ERROR )

old start: 34
old end: 8789049044
old size: 8789049011 (4.09 TiB)
check file system on /dev/sdb1 for errors and (if possible) fix them 01:18:47 ( SUCCESS )

e2fsck -f -y -v /dev/sdb1

Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

16252 inodes used (0.01%)
5227 non-contiguous inodes (32.2%)
# of inodes with ind/dind/tind blocks: 9836/7428/17
421344823 blocks used (38.35%)
0 bad blocks
36 large files

15184 regular files
1059 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
--------
16243 files
e2fsck 1.41.3 (12-Oct-2008)
grow file system to fill the partition 00:00:00 ( SUCCESS )

resize2fs /dev/sdb1

resize2fs 1.41.3 (12-Oct-2008)
The filesystem is already 1098631126 blocks long. Nothing to do!

libparted messages ( INFO )

Unable to satisfy all constraints on the partition.

========================================

Screenshots of gparted just in case:

http://picasaweb.google.com/kclark56...85959227608034

http://picasaweb.google.com/kclark56...85982344618130

Last edited by muad'dib; 09-26-2009 at 04:57 PM.
 
Old 09-23-2009, 09:19 PM   #2
muad'dib
LQ Newbie
 
Registered: Sep 2009
Posts: 14

Original Poster
Rep: Reputation: 0
Update - I tried parted from the command line and got this message: "Error: File system has an incompatible feature enabled"

[root@localhost amahi]# parted /dev/sdb1
GNU Parted 1.8.8
Using /dev/sdb1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resize 1 0 6000
Error: File system has an incompatible feature enabled.
(parted) quit
[root@localhost amahi]#


Can someone please help troubleshoot this?

I searched for this error with Fedora 10 and haven't found a single definitive answer anywhere.

Did I screw the pooch by choosing ext3?
 
Old 09-24-2009, 01:21 AM   #3
John VV
Guru
 
Registered: Aug 2005
Posts: 12,913

Rep: Reputation: 1715
First, I would not put Fedora 10 on a server.
Fedora 10 will HIT END OF LIFE in about 60 days (2 months).
After that THERE WILL BE NO SECURITY UPDATES.


And if you install Fedora 11, it will hit EoL in 8 months.

Please install a long-life distro like RHEL (or CentOS), Debian, or SUSE. They have a 5-year life span.
 
Old 09-24-2009, 10:06 PM   #4
muad'dib
LQ Newbie
 
Registered: Sep 2009
Posts: 14

Original Poster
Rep: Reputation: 0
I noticed the "resize_inode" feature was set on the ext3 filesystem. I tried removing it, since there are reports that resize_inode causes problems with parted, but "tune2fs -O resize_inode" just gave me another error.
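(Side note for anyone searching later: I believe tune2fs wants a leading caret to *clear* a feature, and it needs the device as an argument, so the command probably should have looked more like this. Untested on my box, and clearing resize_inode may still fail depending on the filesystem state:)

```shell
# Assumed syntax, per the tune2fs man page: '^' in front of a feature clears it.
# Run against an UNMOUNTED filesystem, then re-check it with e2fsck.
umount /dev/sdb1
tune2fs -O '^resize_inode' /dev/sdb1
e2fsck -f /dev/sdb1
```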

There is a "MSFTRES" flag on the partition. According to Google, that's often put there by a bug in GParted...?

So what about XFS - anyone?

Would I likely have better luck (growing partitions as the raid expands) with XFS?

As to Fedora - it's a requirement, not a choice. End of story.
 
Old 09-24-2009, 11:22 PM   #5
John VV
Guru
 
Registered: Aug 2005
Posts: 12,913

Rep: Reputation: 1715
I don't use a RAID setup, so I can't help with that part.
Quote:
Did I screw the pooch by choosing ext3
No, but you might want to look into LVM.
It is a pain in the "you know what" to use, until you master the format.

LVM is the default setup on the install DVD.
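Roughly, the LVM way looks like this (illustrative device and volume names, not tested against your Areca box):

```shell
# Initial setup: use the whole RAID device as an LVM physical volume
# (no partition table at all, so no GPT to fix after an expansion).
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l 100%FREE -n media data
mkfs.ext3 /dev/data/media

# After a later hardware RAID expansion:
pvresize /dev/sdb                      # grow the PV to the new device size
lvextend -l +100%FREE /dev/data/media  # grow the logical volume
resize2fs /dev/data/media              # grow ext3 to fill the LV
```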
 
Old 09-26-2009, 11:49 AM   #6
muad'dib
LQ Newbie
 
Registered: Sep 2009
Posts: 14

Original Poster
Rep: Reputation: 0
OK, here's what seems to be the problem, tell me if LVM can help. If it can, I'll be all over it.

When I try to do anything at all with the new space created by expanding the raid, I get this message.

"Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 2929687040 blocks) or continue with the current setting?"

So this makes sense to me. When the system wrote the GPT, the disk was only 4.5 TB; now it's 6 TB, leaving 1.5 TB that's not allocated in the partition table.
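(Quick sanity check on that number: the "blocks" libparted counts here are 512-byte sectors, and 2929687040 of them is almost exactly the new 1.5 TB drive:)

```shell
# 2929687040 sectors * 512 bytes/sector = the unallocated space
extra_sectors=2929687040
bytes=$((extra_sectors * 512))
echo "$bytes bytes"                          # 1499999764480 bytes
echo "$((bytes / 1000000000)) GB (decimal)"  # 1499 GB, i.e. ~1.5 TB
```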

If I could non-destructively re-write the partition table, the problem would be solved. Is that even possible?

As it stands now though I can't do anything with that new 1.5 TB of space. I can't even create a new 1.5 TB partition for it.

So will LVM help with this?

How do people manage raid expansion anyway? Do you always have to create a new raid volume with the new space to get around this?

I've heard people talk about using XFS "without a partition" or using LVM without a partition but how?

And would it even work considering this volume has about 2 TB of data?

So far I have to wonder: what good is the ability to expand a RAID volume if you can't ever use the extra space without re-initializing the entire volume?

Am I missing something?
 
Old 09-26-2009, 05:01 PM   #7
muad'dib
LQ Newbie
 
Registered: Sep 2009
Posts: 14

Original Poster
Rep: Reputation: 0
Heh heh, LVM sounds awesome. I definitely need to read up on it.

For now though I *finally* solved this.

Finally. Such a pain but I got it.

The magic trick was to just run "parted /dev/sdb". I'd always been running it as "parted /dev/sdb1".

When I ran it against "/dev/sdb" and did a print command, it gave me the "do you want to fix the GPT?" prompt - I had to tell it twice to fix the GPT, and that was that.
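Reconstructed from memory, the session went roughly like this (the exact prompt wording may differ):

```shell
# Interactive session; run against the whole disk, not the partition.
parted /dev/sdb
# (parted) print
#   Warning: Not all of the space available to /dev/sdb appears to be used.
#   ...fix the GPT to use all of the space?  -> answer Fix
#   (it asks a second time, about the backup GPT header; answer Fix again)
# (parted) quit
```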

Then I was able to install the XFS tools for Fedora, use GParted to make an XFS partition taking up just the new space, and then use GParted to combine the space into one. I'm impressed with how fast that went compared to doing anything with ext3.

For anyone that wants to use XFS, here is how to get the tools for Fedora.

First add the RPM Fusion repositories. I used the instructions for the command line, here :

http://www.howtoforge.com/the-perfec...p-fedora-10-p3

Then I was able to use the add/remove software app to go download the xfsprogs rpm and dependencies.

The data is migrating to the new XFS partition now. Finally! At least now, when I fill this up and can afford to add more drives, it will be a no-brainer. When all is said and done, all the data will be on a single 6 TB partition.

Oh well, it was a learning experience. Nice to know that GPT tables really can be non-destructively enlarged in Linux. I knew I had to be doing *something* wrong...
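For next time: as I understand it, the XFS grow step runs online against the mount point rather than the device, so it should just be (mount point illustrative):

```shell
# After the next RAID expansion, once the GPT is fixed and the partition grown:
xfs_growfs /mnt/storage   # /mnt/storage = wherever the XFS volume is mounted
df -h /mnt/storage        # confirm the new size
```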
 