LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   Expand ext3 filesystem past 13TB... (https://www.linuxquestions.org/questions/linux-server-73/expand-ext3-filesystem-past-13tb-877459/)

hazoom 04-27-2011 03:31 PM

Expand ext3 filesystem past 13TB...
 
Hi all,

I have a 13TB ext3 filesystem (no partitions, just the whole drive /dev/sdb). I am trying to expand it: I added more drives to the Dell MD1000 and grew the array, and everything was fine until I ran resize2fs. Here is what I did:

[root@sys ~]# pvresize /dev/sdb

[root@sys ~]# lvextend -l +417200 /dev/VolGroup01/LogVol00
Extending logical volume LogVol00 to 25.46 TB
Logical volume LogVol00 successfully resized

[root@sys ~]# resize2fs /dev/VolGroup01/LogVol00
resize2fs 1.39 (29-May-2006)
resize2fs: File too large while trying to determine filesystem size


And here is where the problem is. Any thoughts or advice? Thanks in advance!

-H

TB0ne 04-27-2011 03:50 PM

Quote:

Originally Posted by hazoom (Post 4338450)
Hi all,
I have a 13TB ext3 filesystem (no partitions, just the whole drive /dev/sdb). I am trying to expand it: I added more drives to the Dell MD1000 and grew the array, and everything was fine until I ran resize2fs. Here is what I did:

[root@sys ~]# pvresize /dev/sdb
[root@sys ~]# lvextend -l +417200 /dev/VolGroup01/LogVol00
Extending logical volume LogVol00 to 25.46 TB
Logical volume LogVol00 successfully resized

[root@sys ~]# resize2fs /dev/VolGroup01/LogVol00
resize2fs 1.39 (29-May-2006)
resize2fs: File too large while trying to determine filesystem size

And here is where the problem is. Any thoughts or advice? Thanks in advance!

It would help if we knew which version/distro of Linux you're using, and what the size was before you expanded it.

First, some earlier versions of RHEL5 had a bug related to this:
http://rhn.redhat.com/errata/RHBA-2009-1291.html

so if you're using RHEL5.x and aren't paying for a Red Hat Network subscription, you wouldn't have gotten the patch/bugfix, so you're stuck. Also, what block size are you using? The maximum ext3 filesystem size with 4k blocks is 16TB (with 2k blocks, 8TB), and you'll also need room for the journal and other overhead. ext4 or JFS are usually better choices for filesystems of that size, not only for reasons like this but also for performance.
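That ceiling falls out of ext3's on-disk format: block numbers are 32-bit, so a filesystem can address at most 2^32 blocks, and the maximum size is 2^32 times the block size. A quick arithmetic sketch (no filesystem needed):

```shell
# ext2/ext3 address blocks with a 32-bit number,
# so max filesystem size = 2^32 * block size.
max_blocks=4294967296   # 2^32
for bs in 1024 2048 4096; do
  # divide by 2^40 to convert bytes to TiB
  echo "block size ${bs}: max ext3 size $(( bs * max_blocks / 1099511627776 )) TiB"
done
```

With 4k blocks that comes out to 16 TiB, which is why a resize from 13TB to 25.46TB cannot work on ext3 regardless of the resize2fs version.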

hazoom 04-27-2011 04:07 PM

I am running CentOS release 5.3 (Final). uname -a is:
Linux system.domain.com 2.6.18-128.el5 #1 SMP Wed Jan 21 10:41:14 EST 2009 x86_64 x86_64 x86_64 GNU/Linux

Currently the size is 13TB. I cannot determine the block size because I get the following:

[root@sys ~]# tune2fs -l /dev/sdb |grep -i "block"
tune2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.
[root@sys ~]# dumpe2fs /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.

Here is my /etc/fstab entry:
/dev/VolGroup01/LogVol00 /backup/ ext3 defaults 0 0

Here is the fdisk output:

[root@sys ~]# fdisk -l /dev/sdb

WARNING: The size of this disk is 28.0 TB (27997818060800 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).


Disk /dev/sdb: 27997.8 GB, 27997818060800 bytes
255 heads, 63 sectors/track, 3403874 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
[root@sys ~]#
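For what it's worth, the superblock error above is presumably because the filesystem lives on the logical volume (/dev/VolGroup01/LogVol00, per the fstab entry), not on the raw /dev/sdb, so tune2fs would need to be pointed at the LV to confirm the block size. Assuming 4k blocks (an unverified assumption), the numbers themselves show the target size is out of reach for ext3:

```shell
# Assuming 4k blocks (unverified - run tune2fs -l on the LV to confirm),
# check whether a volume this size fits under ext3's 2^32-block ceiling.
lv_bytes=27997818060800          # disk size from the fdisk output above
blocks_needed=$(( lv_bytes / 4096 ))
max_blocks=4294967296            # 2^32
echo "blocks needed: ${blocks_needed}, ext3 max: ${max_blocks}"
if [ "$blocks_needed" -gt "$max_blocks" ]; then
  echo "over ext3's 16 TiB limit"
fi
```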

