500 GB Sata Hard Drive - volume issue >> shows 452 GB
Hello all,
Ok, so I know that a 500 GB drive actually gives you about 465 GB of usable space, since manufacturers count in decimal gigabytes. I just updated to Fedora Core 6, and my drive is showing only 452 GB. If I view the logical volume tool via the GUI in X, it shows the drive as 465 GB. No, I did not set up the drive with a 452 GB partition... so my question is: is there a better way to figure out why my drive is reading only 452 GB rather than 465 GB? I could reformat, but I don't want to since I have this baby all set up.
Thanks a bunch, and hopefully there is an answer out there for me. Oh, and I know about smartd, but I don't think that will help my cause; correct me if I am wrong.
Perhaps this has to do with the filesystem you are using. Linux ext filesystems usually keep some blocks "reserved for the superuser" to use when the filesystem is full (by default this is 5% of total space, IIRC).
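If you want to check the reserved figure yourself, something like this should work (a sketch; I'm assuming your filesystem lives on /dev/sdb1, so adjust the device name to match your setup):

# Print the superblock summary, including total and reserved block counts
# (assumes an ext2/ext3 filesystem on /dev/sdb1)
dumpe2fs -h /dev/sdb1 | grep -i 'block count'

# tune2fs -l prints the same superblock information
tune2fs -l /dev/sdb1 | grep -i 'reserved'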
Hmm, that would make sense if that's the case. I am using the ext3 filesystem...
Do you know if there is a way to calculate it? I also know that when I set up the system it was through a logical volume; I remember setting something regarding 32 MB, though I didn't really understand what that setting was for.
Any sites you can point me at, or more information you can give me on that?
Ok so I tried that and got this error:
[root@sdm ~]# dumpe2fs -h /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.
I am pretty sure I got this error because I am using logical volumes rather than plain partitions.
Any ideas?
TY again, osor! I will write this tip down for the other Linux machines I have.
You can't tell dumpe2fs that /dev/sdb is an ext2/3 filesystem, because it isn't. Generally, dumpe2fs doesn't care how your partitions are implemented (logical or physical), as long as you point it at a block device (in some cases an image file will suffice) that contains a single ext2 or ext3 filesystem. Try "dumpe2fs -h /dev/mapper/VolGroup01-LogVol00" (which is what I assume to be the block device mounted on your home partition).
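If you are not sure what the logical volume is called, you can list what LVM has set up (this assumes the standard LVM2 tools are installed; names like VolGroup01 are just the Fedora defaults):

# List logical volumes and their device paths
lvdisplay | grep 'LV Name'

# Or just look at the device nodes the device mapper created
ls /dev/mapper/

# Then point dumpe2fs at the logical volume, not the raw disk
dumpe2fs -h /dev/mapper/VolGroup01-LogVol00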
None of the numbers I convert there make sense to me. I also note that my block size is 4096. Is that 4096 bytes, KB, MB, or GB?
I am not going to lie: I don't understand the block concept. The only thing that rings a bell in that report is the inode, which I think has to do with your directory filling up?!
Hope you can continue to help, osor, thanks again! I also hope you are not getting frustrated with the noob... haha. TY
Let’s look first at your block count (122085376). This is the number of blocks in the particular filesystem. We also have to know how big each block is: the block size (4096 bytes). So to see how much space your filesystem has for use, just multiply 122085376 × 4096 = 500061700096. We can also convert the bytes to gigabytes (more accurately gibibytes): 500061700096 × 2^(-30) = 465.71875.
We now look at the reserved block count (6104268). Let’s convert this to bytes as well: 6104268 × 4096 = 25003081728. Now, let’s subtract this from our previous total: 500061700096 - 25003081728 = 475058618368. Now, convert to gibibytes: 475058618368 × 2^(-30) = 442.432815552.
So we see that in terms of absolute space, the filesystem holds 465.71875 GiB, but in terms of space usable by a normal user, the filesystem holds 442.43282 GiB.
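If you want to reproduce that arithmetic without a calculator, here is a rough one-liner that pulls the numbers straight out of dumpe2fs (a sketch, assuming the same /dev/mapper/VolGroup01-LogVol00 device as above):

dumpe2fs -h /dev/mapper/VolGroup01-LogVol00 2>/dev/null | awk -F: '
    /^Block count/          { blocks = $2 }      # total physical blocks
    /^Reserved block count/ { reserved = $2 }    # blocks held back for root
    /^Block size/           { bsize = $2 }       # bytes per block
    END {
        printf "Total:    %.5f GiB\n", blocks * bsize / 2^30
        printf "Reserved: %.5f GiB\n", reserved * bsize / 2^30
        printf "Usable:   %.5f GiB\n", (blocks - reserved) * bsize / 2^30
    }'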
The use of reserved blocks "avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem" (taken from the mke2fs manpage). Since this is serving as your home partition, there will never be a time when root-owned daemons need to write to the filesystem when it's full (unless you have an unusual setup). So it is safe to get rid of the reserved blocks (i.e., turn them into normal, usable blocks) with this command: "sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00" (NOTE: DO THIS ONLY WHEN THE DEVICE IS NOT MOUNTED OR MOUNTED READ-ONLY)
DISCLAIMER: I AM NOT RESPONSIBLE FOR ANY DATA LOSS OR HEADACHES RESULTING FROM THE USE OF MY INSTRUCTIONS.
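For the record, a careful way to apply that might look like this (a sketch; it assumes the volume holds /home, that /home is listed in /etc/fstab, and that nothing is using it, so the unmount succeeds):

# Unmount first; tune2fs should not touch a mounted, writable filesystem
umount /home

# Zero out the reserved percentage and reserved block count
tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00

# Remount and compare the "Avail" column with what df showed before
mount /home
df -h /home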
Going to reboot and see what happens. Don't worry, I made a backup of the files which are important to me. Probably a good thing to put that disclaimer there for all those who would forget or not think about backing up their important files. I will let you know.
I am an idiot. The number of blocks returned by dumpe2fs is the total number of physical blocks available for use by the filesystem. The filesystem itself has a good deal of overhead: blocks are divided into block groups to minimize seek times and fragmentation, and roughly 255 blocks per block group store filesystem information such as backup copies of the superblock, the block usage bitmap for the group, the inode usage bitmap for the group, and an inode table for the group. The rest of the blocks are data blocks (the ones usable by you). So the total usable space (reserved or otherwise) will always be less than the total number of blocks reported by dumpe2fs. You can examine each block group's specifics in detail by looking at the complete output of dumpe2fs (i.e., "dumpe2fs /dev/mapper/VolGroup01-LogVol00").
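If you are curious, you can see that per-group overhead yourself; the full dumpe2fs output (without -h) prints one stanza per block group (same assumed device as before):

# Full output: one section per block group, showing where the superblock
# backups, bitmaps, and inode tables sit within each group
dumpe2fs /dev/mapper/VolGroup01-LogVol00 2>/dev/null | less

# Count how many block groups the filesystem was split into
dumpe2fs /dev/mapper/VolGroup01-LogVol00 2>/dev/null | grep -c '^Group'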
So the whole thing about reserved blocks didn't change the number of blocks reported by statfs() (and consequently the "Size" field from df). You did get some space for "free", however: the "Avail" column from df should now be larger by the amount that used to be reserved.
If you really wanted to, you could cut down on the space used for filesystem information (at the time of filesystem creation) by reducing the number of block groups created by mke2fs. This would, however, be futile in all but the most specialized circumstances, because read times would increase dramatically (mke2fs optimizes reading and fragmentation when choosing the number of block groups).
So the moral(s) of the story:
Drive manufacturers "lie" by saying 500 gigabytes when there are only 465.71875 binary gigabytes (GiB) of physical space (see the quick check after this list).
The number of gigabytes usable for data will always be less than that (and just how much less depends on filesystem specifics).
Linux's extended filesystems usually reserve some amount of space for root to use once the filesystem is full. Depending on how the filesystem will be used, such a circumstance may not produce a catastrophic outcome, and the reserved space can safely be removed. (A filesystem holding /home is a good example of a situation in which reserved space is not important; a filesystem holding / or /var is a good example of one in which it is.)
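As a quick check on the first point, the decimal-to-binary conversion can be done right in the shell (plain arithmetic; nothing here depends on your hardware):

# 500 "marketing" gigabytes = 500 * 10^9 bytes; divide by 2^30 for GiB
echo $(( 500 * 10**9 / 2**30 ))
# Prints 465 (the integer part of about 465.66; this particular drive
# holds slightly more than 500 * 10^9 bytes, hence the 465.72 figure above)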
Awesome. I am an idiot too; I missed the "original" and "new" available space. I was only looking at the size rather than the available space. Other than that, what you say makes sense. Once you refreshed my memory about reserved space and how Linux likes to be smart, it made sense. The loss seemed high, though, and that's why I wanted to look into it: I already knew a 500 GB hard drive (yes, those tricky manufacturers who somehow pull the blinds over our eyes) is actually 465 GB, so when I got 452 GB of available space I was puzzled.
Anyway, this has been a great lesson and another wonderful wealth of knowledge to gain. Thank you, osor, for all of your time and comments. Have a good one!