
LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Why Am I Seeing Less Space on A Hard Disk After Formatting It? (https://www.linuxquestions.org/questions/linux-newbie-8/why-am-i-seeing-less-space-on-a-hard-disk-after-formatting-it-4175558609/)

JockVSJock 11-11-2015 01:07 PM

Why Am I Seeing Less Space on A Hard Disk After Formatting It?
 
I'm not sure where to start to look here or research this.

I added four new disks in vCenter for a VM, which is a RHEL server.

Disk 1: 500 GB
Disk 2: 1 TB
Disk 3: 1 TB
Disk 4: 1 TB

After adding them to the VM, I power it on and check fdisk.

Fdisk shows the following:

Code:


532 GB
1045 GB
1045 GB
1045 GB

This shows more space than what was allocated in vCenter.

I then proceed to create a partition on each, and then create LVM volumes on them (pvcreate, vgcreate and lvcreate), using up all of the space.

I then format them with ext3.
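
Roughly, the sequence per disk was something like this (the device name /dev/sdb is just a placeholder; the VG/LV names and mount point are the ones from my setup):

Code:

# one primary partition on the raw disk, type 8e (Linux LVM)
fdisk /dev/sdb

# LVM stack on that partition
pvcreate /dev/sdb1
vgcreate VolGroup03 /dev/sdb1
lvcreate -l 100%FREE -n LogVol00 VolGroup03

# format and mount
mkfs.ext3 /dev/VolGroup03/LogVol00
mkdir -p /oraexp1
mount /dev/VolGroup03/LogVol00 /oraexp1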

Once I'm done, I run df -ha, which shows way less space. I'm wondering why? Is it that the space is eaten up by the hypervisor and the OS? Or, basically, how does this work, and how can I explain it to someone else, such as a manager?

suicidaleggroll 11-11-2015 01:48 PM

How much less space?

fdisk and drive manufacturers use GB (base 10); df uses GiB (base 2):
532 GB = 495 GiB
1045 GB = 973 GiB

Also, ext defaults to 5% reserved blocks for root. Add up your used space and available space in df and it should be ~5% less than the total space. If you want to change the amount of reserved space, use tune2fs, e.g. on a system with 4 KiB blocks:
Code:

tune2fs -r 20000 /dev/sdX1

would change the reserved space on that filesystem from 5% of the total capacity to ~80 MB (that's usually what I use on non-system drives); /dev/sdX1 is just a placeholder for your partition or LV.
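
If you just want to see what's currently reserved, or set it as a percentage instead of a block count, tune2fs can do that too (again, the device name is only a placeholder):

Code:

# current reserved block count and block size
tune2fs -l /dev/sdX1 | grep -iE 'reserved block count|block size'

# or reserve 1% instead of the default 5%
tune2fs -m 1 /dev/sdX1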

Between the two of those you're looking at somewhere around a 10-15% drop from the drive's advertised size to the available space df reports when it's empty. If you're seeing a difference much bigger than that, it could be something else.
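
As a rough sanity check, the ext reserve and the GB-to-GiB conversion multiply out like this:

Code:

# fraction of the advertised size left after the 5% reserve and the GB->GiB conversion
awk 'BEGIN { printf "%.3f\n", 0.95 * (1000/1024)^3 }'   # ~0.885, before filesystem metadata takes its own share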

jefro 11-11-2015 05:20 PM

I'd wonder why you are using ext3 too. Just curious.

JockVSJock 11-12-2015 08:15 AM

This is what df -ha displays.

Code:

 

/dev/mapper/VolGroup03-LogVol00
                      495G  198M  469G  1% /oraexp1
/dev/mapper/VolGroup04-LogVol00
                      987G  200M  936G  1% /oraexp2
/dev/mapper/VolGroup05-LogVol00
                      987G  200M  936G  1% /oraexp3
/dev/mapper/VolGroup06-LogVol00
                      987G  200M  936G  1% /oraexp4

However, from vSphere, the disks show:

Code:

500 GB
1000 GB
1000 GB
1000 GB

So I'm not familiar with reserved space or reserved blocks. Also, I didn't know that fdisk used GB while df used GiB. I'm not entirely familiar with those, so I need to read up on that.

As for why we are using ext3, we are holding onto RHEL5 till the end.

suicidaleggroll 11-12-2015 08:40 AM

Looks pretty standard

469 / 495 = 0.947, so there's your ~5% reserved space, and you can see the mapping between GB and GiB in my post above, which accounts for the drop from 532 to 495.

If you want to convert it yourself, just take the advertised size in GB from the drive manufacturer or fdisk and multiply by (1000/1024)^3 to get GiB (which is what df reports).
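
For example, with plain awk (the sizes are the GB figures fdisk reported above):

Code:

# advertised/fdisk GB -> GiB (what df shows)
awk 'BEGIN { printf "%.0f GiB\n", 532  * (1000/1024)^3 }'   # ~495
awk 'BEGIN { printf "%.0f GiB\n", 1045 * (1000/1024)^3 }'   # ~973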

jpollard 11-12-2015 11:16 AM

Well - there are other overheads.

You show that you are using LVM, which imposes some overhead for tracking volumes (not certain of the actual amount, but I think it's around 1 MB, depending on how large things are... it can grow as more disks are added to a volume group).

Using ext3/4 also has overhead - boot blocks, the superblock and its backup copies, inode tables, free-block bitmaps, block-group management... When a file gets created, some blocks (once the file is larger than a minimum size) will hold metadata identifying the other blocks where the data is - thus more overhead.

If this is layered on top of a RAID, there is also the RAID configuration and error management/recovery overhead (more blocks used to identify the configuration, plus backups of that configuration...). When a logical volume includes multiple devices, LVM adds some overhead so the volume can be reconstructed at boot time; the same applies to RAID devices.

So there is overhead for using RAID,
plus overhead for using logical volumes,
plus overhead for the filesystem itself.
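
If you want to see a couple of these numbers on an actual system (the device and LV names below are only placeholders), something like this shows where LVM puts its metadata and what the filesystem sets aside for bookkeeping:

Code:

# offset of the first physical extent on a PV; the space before it is LVM metadata
pvs -o +pe_start /dev/sdb1

# superblock summary: block size, inode count, reserved block count, etc.
dumpe2fs -h /dev/VolGroup03/LogVol00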

In addition, there is the easy confusion created by vendors selling disks: anything that makes the disk look bigger is good, so they quote base-10 GB even though everything else dealing with the disk uses base 2... hence the Gib notation, where 1024 bytes -> 1 Kib, 1024 Kib -> 1 Mib, and 1024 Mib -> 1 Gib, all of which make the disk look smaller.

JockVSJock 11-14-2015 09:18 PM

Unfortunately I'm not terribly familiar with tune2fs, GB (base 10), or GiB (base 2), so I have more reading to do.

thanks again

Emerson 11-14-2015 10:07 PM

The correct units are KiB (not Kib) and kB (not KB or Kb); see more here: http://physics.nist.gov/cuu/Units/binary.html

suicidaleggroll 11-15-2015 08:11 AM

Quote:

Originally Posted by Emerson (Post 5449811)
The correct units are KiB (not Kib)

KiB = kibibyte
Kib = kibibit

Both are valid units.

