09-15-2013, 05:13 PM   #1
PeterSteele
mkfs.ext4 on a 3.7TB partition creates a 1.6TB file system


We have a server running CentOS 6.4 with a 4TB drive partitioned as shown below:
Code:
Model: ATA WDC WD4001FAEX-0 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  10.7GB  10.7GB  ext3                  boot
 2      10.7GB  3898GB  3888GB               primary
 3      3898GB  3950GB  51.2GB               primary
 4      3950GB  4001GB  51.2GB               primary
I'm formatting this drive as ext4:
Code:
# mkfs.ext4 -K /dev/sdb2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
103071744 inodes, 412262144 blocks
20613107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
12582 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Instead of the expected 3.7TB or so, the size of my file system is only about 1.6TB:
Code:
# mount /dev/sdb2 /data
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              9.9G  2.0G  7.5G  21% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdb2             1.6T  197M  1.5T   1% /data
The maximum ext4 file system size, though, is I believe 16TB (assuming 1K = 1024), so 3.7TB should be no problem. What trick is needed to get the full size of the partition?
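
As a sanity check (my own arithmetic, using the 4096-byte block size shown in the mke2fs output above), the block count mke2fs printed accounts exactly for the 1.6TB, and the "Maximum filesystem blocks" line is just ext4's 32-bit block-addressing ceiling (2^32 blocks), not the device size:
Code:
# 412262144 blocks * 4096 bytes/block = 1688625741824 bytes, about 1.6TB
echo $((412262144 * 4096))
# The 16TiB ext4 ceiling with 4K blocks: 2^32 blocks * 4096 bytes/block
echo $((4294967296 * 4096))
In other words, mke2fs formatted exactly the size it was told the device is, which suggests the problem is in the kernel's idea of the partition size, not in mkfs itself.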

The problem does not end there for us. What we really want to do is use the raw partition as a virtual drive under a KVM-based VM, and the same limit shows up there too, before we even format it. We specify /dev/sdb2 as a virtual drive mapped to /dev/vdb, but when the virtual machine boots, /dev/vdb is reported as only 1689GB instead of the expected 3888GB.

It seems awfully suspicious that the maximum ext4 filesystem we can create on this partition is 1.6TB, and the same raw partition mapped to a virtual drive under a KVM VM maxes out at the same size.

Is there some obvious explanation here?
 
09-15-2013, 05:36 PM   #2
PeterSteele (Original Poster)
Here's another piece of information: I ran the same mkfs command under CentOS Live and the file system appears to be the full size expected (well, 3.5TB reported by df, so that's close to max at least).

The CentOS version that's hitting the 1.5TB limit is based on CentOS Minimal with additions for virtualization (libvirt, etc), plus a few other packages. It's a minimal system with no UI--strictly ssh access. Could we be missing a piece in our minimal OS that limits the size of hard drives? I can't imagine what it would be but since both the ext4 volume as well as the VM volume max out at 1.5TB, there seems to be a possible connection here.
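
One diagnostic that should narrow this down (a sketch, assuming the stock parted and util-linux tools are present on the minimal install) is to compare what parted reads from the on-disk GPT against what the running kernel believes:
Code:
# What parted reads directly from the disk label:
parted /dev/sdb unit GB print
# What the running kernel thinks the sizes are (the blocks column is 1K blocks):
grep sdb /proc/partitions
# The partition size in bytes, as the kernel reports it:
blockdev --getsize64 /dev/sdb2
If the two disagree, then mkfs and KVM are both just consuming the kernel's truncated view of /dev/sdb2, and the real question becomes why the kernel's view is wrong.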

Any suggestions would be appreciated.
 
09-16-2013, 08:38 AM   #3
PeterSteele (Original Poster)
I decided to start from scratch and created an unaltered CentOS Minimal installation, with no additions of any kind. I installed this OS on a USB stick and then used it to boot the server with the 4TB drive. I then ran mkfs.ext4 on the same partition of the 4TB drive that I'd been using in my other tests, leaving the partitions exactly as they had previously been set up.

Unfortunately the 1.5TB limit did not occur in this case. The file system was the full 3.5TB that it should be. I say unfortunately because I was hoping to reproduce the error, validating my suspicions that the issue was somehow related to the CentOS Minimal install type.

The key thing I've noted when doing the mkfs.ext4 is the block count that's reported: mkfs reports 412262144 blocks in the 1.5TB case but 949133056 blocks in the 3.5TB case. Curiously, both report a "Maximum filesystem blocks" count of 4294967296.
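
Converted to bytes (my arithmetic, using the 4096-byte block size from the mke2fs output), those two block counts line up with everything observed so far:
Code:
# Bad case: 1688625741824 bytes, i.e. the same 1689GB the KVM guest
# reported for /dev/vdb
echo $((412262144 * 4096))
# Good case: 3887648997376 bytes, about 3888GB, the partition's true size
echo $((949133056 * 4096))
So in both cases mke2fs used the full size of the device it was handed; the difference is in the size the kernel reported for the partition.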

Ordinarily I don't pay a lot of attention to the values output by mkfs so I don't know what these differences mean. Plus, it doesn't solve our real problem of the partition size being limited when used as the source for a virtual drive.
 
09-17-2013, 08:35 AM   #4
PeterSteele (Original Poster)
For those who are interested, the solution turned out to be somewhat unexpected. What I didn't explain was that partition 1 of this 4TB disk had been previously configured as a member of a RAID 1 array for the system's OS. On a 1U server with 4 drives, we carve out a slice of each drive to be a member of this array.

At the time this is done, it is not known exactly how the remaining space will be partitioned. Once this is determined, calls to parted are made to complete the partitioning. For some reason, when the drive is larger than some limit (I assume 2TB), odd things happen. The parted command sees the full size of the disk and has no issues partitioning it, but subsequent use of the partitions appears to be impacted. Specifically, partitions past the 2TB mark on the disk are truncated.

However, if partition 1 of a drive is first removed from the RAID, the additional partitions then created as needed, and partition 1 then added back to the RAID array, the full size of the newly created partitions is available, past the 2TB point of the disk. I'm not entirely clear why this is, and it would be useful to understand what's going on here. I've searched the web extensively and have found no one reporting a problem like this.
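
A plausible mechanism (my assumption; nothing in this thread confirms it): while partition 1 is an active md member the disk is busy, so after parted writes the new table the kernel cannot do a full re-read of it and keeps serving a stale or truncated in-memory copy. One supporting detail: 3888GB minus 2^32 512-byte sectors (2199GB) is 1689GB, exactly the truncated size seen earlier, as if the partition's sector count wrapped at 32 bits somewhere in that update path. A sketch of the workaround sequence (device and array names assumed; not to be run verbatim on a live system):
Code:
# Take partition 1 out of the RAID 1 array so the disk is no longer busy
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# Create the remaining partitions
parted /dev/sdb mkpart primary 10.7GB 3898GB
parted /dev/sdb mkpart primary 3898GB 3950GB
parted /dev/sdb mkpart primary 3950GB 4001GB
# Force the kernel to re-read the partition table
partprobe /dev/sdb
# Put partition 1 back into the array; it will resync
mdadm /dev/md0 --add /dev/sdb1
If the stale-table theory is right, simply running partprobe /dev/sdb (or blockdev --rereadpt /dev/sdb) after partitioning, without touching the array, would be worth testing as a lighter-weight fix.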

If anyone has an idea what causes this behavior, I'd appreciate some feedback.

Last edited by PeterSteele; 09-17-2013 at 08:46 AM.
 
  

