[SOLVED] Why Am I Seeing Less Space on A Hard Disk After Formatting It?
I'm not sure where to start to look here or research this.
I added four new disks in vCenter for a VM, which is a RHEL server:
Disk 1: 500 GB
Disk 2: 1 TB
Disk 3: 1 TB
Disk 4: 1 TB
After adding them to the VM, I powered it on and checked fdisk.
Fdisk shows the following:
Code:
532 GB
1045 GB
1045 GB
1045 GB
This shows more space than what was allocated in vCenter.
I then proceed to create a partition on each disk, then create LVM volumes on them (pvcreate, vgcreate, and lvcreate), using up all of the space.
I then format them with ext3.
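Roughly, the commands were along these lines (the device, volume group, and logical volume names here are placeholders, not necessarily what I used):
Code:
# partition each disk first (fdisk), then:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
vgcreate datavg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate -l 100%FREE -n datalv datavg
mkfs.ext3 /dev/datavg/datalv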
Once I'm done, I run df -ha, which shows way less space. I'm wondering why? Is it that the space is eaten up by the hypervisor and the OS? Basically, how does this work, and how can I explain it back to someone else, such as a manager?
Also, ext defaults to 5% reserved blocks for root. Add up your used space and available space in df and it should be ~5% less than the total space. If you want to change the amount of reserved space, use tune2fs, e.g. on a filesystem with 4 KiB blocks:
Code:
tune2fs -r 20000 /dev/sdXN   # substitute the actual device
would change the reserved space from 5% of the total capacity to ~80 MB (20000 blocks x 4 KiB; that's usually what I use on non-system drives).
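To see the current value first, tune2fs can also report it (again, /dev/sdXN is a placeholder):
Code:
tune2fs -l /dev/sdXN | grep -i 'reserved block count'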
Between those two (the GB-to-GiB difference and the 5% reserved blocks) you're looking at around a 15% drop from the drive's advertised size versus the available space reported by df when it's empty. If you're seeing a bigger difference than that, it could be something else.
So I'm not familiar with reserved space or blocks. I also didn't know that fdisk used GB while df used GiB. I'm not entirely familiar with those, so I need to read up on them.
As for why we are using ext3, we are holding onto RHEL5 till the end.
469 / 495 = 0.947, so there's your ~5% reserved blocks, and you can see the mapping between GB and GiB in my post above, which accounts for the drop from 532 to 495.
If you want to convert yourself, just take the advertised size from the drive manufacturer or fdisk in GB, and multiply by (1000/1024)^3 to convert to GiB (what df reports).
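As a quick sketch of that conversion in the shell, using the 532 GB figure from the fdisk output above:
Code:
echo '532 * (1000/1024)^3' | bc -l   # prints ~495.4, i.e. 495 GiB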
You show that you are using LVM, which imposes some overhead for tracking volumes (not certain of the actual amount, but I think it's around 1 MB, depending on how large things are... it can grow as more disks are added to a logical volume).
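If you want to see that for yourself, pvs can show where LVM's metadata ends and the usable extents begin (pe_start is the offset of the first physical extent on each PV):
Code:
pvs -o +pe_start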
Using ext3/4 also has overheads: the boot block, superblock and backup superblocks, group descriptors, inode tables, and free block/inode bitmaps... When a file gets created, some blocks (once the file is larger than a minimum size) hold metadata identifying the other blocks where the data lives (indirect blocks), thus more overhead.
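dumpe2fs will show you most of that accounting, block counts, reserved blocks, inode counts and so on (placeholder device name again):
Code:
dumpe2fs -h /dev/sdXN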
If this is layered on top of a RAID, there is also the RAID configuration and error management/recovery overhead (more blocks used to identify the configuration, plus backups of that configuration). When a logical volume includes multiple devices, the logical volume adds some overhead to be able to reconstruct the volume at boot time. The same applies to RAID devices.
So there is overhead for using a RAID,
plus overhead for using logical volumes,
plus overhead for the filesystem itself.
In addition, there is the easy confusion caused by how vendors sell disks: anything that makes the disk look bigger is good, so they use base-10 GB even though nearly everything that uses the disk counts in base 2... hence the GiB notation, where 1024 B -> 1 KiB, 1024 KiB -> 1 MiB, and 1024 MiB -> 1 GiB, all of which make disks look smaller.