Why Am I Seeing Less Space on a Hard Disk After Formatting It?
I'm not sure where to start to look here or research this.
I added four new disks in vCenter for a VM, which is a RHEL server:
Disk 1: 500 GB
Disk 2: 1 TB
Disk 3: 1 TB
Disk 4: 1 TB
After adding them to the VM, I power it on and check fdisk, which shows the following: Code:
I then proceed to create a partition on each, then create LVM volumes on them (pvcreate, vgcreate, and lvcreate), using up all of the space. I then format them with ext3. Once I'm done, I run df -ha, which shows way less space. I'm wondering why? Is it that the space is eaten up by the hypervisor and the OS? Basically, how does this work, and how can I explain it to someone else, such as a manager? |
How much less space?
fdisk and drive manufacturers use GB (base 10); df uses GiB (base 2): 532 GB = 495 GiB, 1045 GB = 973 GiB. Also, ext defaults to reserving 5% of blocks for root. Add up your used space and available space in df and it should be ~5% less than the total space. If you want to change the amount of reserved space, use tune2fs, e.g. on a system with 4 KiB blocks: Code:
tune2fs -r 20000 <device>
Between the two of those you're looking at around a 12% drop from the drive's advertised size to the available space df reports when the filesystem is empty (about 7% from GB vs. GiB, plus the 5% reservation). If you're seeing a difference bigger than that, it could be something else. |
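As a worked example of the numbers above (a sketch, not from the post): with 4 KiB blocks, setting the reserved-block count to 20000 reserves roughly 78 MiB for root, far less than the default 5% of a large filesystem.

```python
block_size = 4096          # 4 KiB filesystem blocks, as in the tune2fs example
reserved_blocks = 20000    # the value passed to tune2fs -r
reserved_bytes = reserved_blocks * block_size
print(reserved_bytes / 1024**2)  # 78.125 MiB reserved for root
```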
I'd wonder why you are using ext3 too. Just curious.
|
This is what df -ha displays.
Code:
Code:
500 GB
As for why we are using ext3: we are holding onto RHEL 5 till the end. |
Looks pretty standard
469 / 495 = 0.947, so there's your 5% reserved blocks, and you can see the GB-to-GiB mapping in my post above, which accounts for the drop from 532 to 495. If you want to convert yourself, just take the advertised size from the drive manufacturer or fdisk in GB and multiply by (1000/1024)^3 to get GiB (what df reports). |
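That conversion can be sketched in a couple of lines of Python (`gb_to_gib` is a hypothetical helper name, not a standard function):

```python
def gb_to_gib(gb: float) -> float:
    """Convert a base-10 GB figure (drive vendor / fdisk) to base-2 GiB (df)."""
    return gb * (1000 / 1024) ** 3

print(round(gb_to_gib(532)))   # 495
print(round(gb_to_gib(1045)))  # 973
```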
Well - there are other overheads.
You show that you are using LVM, which imposes some overhead for tracking volumes (I'm not certain of the actual amount, but I think it's around 1 MB, depending on how large things are; it can grow as more disks are added to a logical volume). Using ext3/4 also has overheads: boot blocks, volume headers, backup superblocks, the inode table (a fixed allocation per inode), free-block lists, block-group management, and so on. When a file larger than a minimum size is created, some of its blocks hold metadata identifying the other blocks where the data lives, which is more overhead still.

If this is layered on top of a RAID, there is the RAID configuration plus error-management/recovery overhead (more blocks used to record the configuration, backups of the configuration, ...). When a logical volume includes multiple devices, LVM adds metadata so it can reconstruct the volume at boot time; the same applies to RAID devices. So there is overhead for the RAID, plus overhead for logical volumes, plus overhead for the filesystem itself.

In addition, there is the easy confusion created by vendors selling disks: anything that makes the disk look bigger is good, so they quote base-10 GB even though everything on the disk works in base 2. Hence the GiB notation: 1024 B = 1 KiB, 1024 KiB = 1 MiB, and 1024 MiB = 1 GiB, all of which make disks look smaller. |
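Putting the two biggest effects together gives a rough estimate of what df should report as available on a fresh ext3 filesystem (a sketch with a hypothetical helper name; it deliberately ignores the smaller LVM and filesystem-metadata overheads described above, which is why the real figure comes out slightly lower):

```python
def expected_available_gib(advertised_gb: float, reserved_frac: float = 0.05) -> float:
    """Vendor's base-10 GB -> base-2 GiB, minus ext3's default 5% root reservation."""
    gib = advertised_gb * (1000 / 1024) ** 3
    return gib * (1 - reserved_frac)

print(round(expected_available_gib(532)))  # 471, close to the 469 GiB df actually showed
```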
Unfortunately I'm not to terribly familiar with tune2fs, GB (Base 10) and GiB (Base 2), so I have more reading to do.
thanks again |
The correct units are KiB (not Kib) and kB (not KB or Kb), see more here: http://physics.nist.gov/cuu/Units/binary.html
|
Quote:
Kib = kibibit. Both are valid units. |