
bskrakes 04-18-2007 09:17 PM

500 GB Sata Hard Drive - volume issue >> shows 452 GB
 
Hello all,

Ok so I know that an advertised 500 GB is really about 465 GB of usable space. I just updated to Fedora Core 6 and my drive is showing only 452 GB. If I view the logical volume tool via the GUI in X it shows the drive as 465 GB. NO, I did not set up the drive to have a 452 GB partition.... so my question is: is there a good way to figure out why my drive is only reading 452 GB rather than 465 GB? I could format, but I don't want to since I have this baby all set up.

Here is my system layout

Code:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      142G  3.4G  132G   3% /
/dev/sda1              99M   17M   78M  18% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup01-LogVol00
                      452G   82G  347G  20% /home
/dev/sda5             487M   13M  449M   3% /tmp


Thanks a bunch and hopefully there is an answer out there for me. Oh and I know about smartd but don't think that will help my cause, correct me if I am wrong.

osor 04-18-2007 09:59 PM

Perhaps this has to do with the filesystem you are using. I think Linux's ext filesystems usually keep some “blocks reserved for superuser” to use when the filesystem is full (by default this is 5% of the total space, IIRC).
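If you want to check whether that is what is going on, tune2fs can print the reservation (a quick sketch; /dev/sdXN below is just a placeholder for whatever block device holds the filesystem):

Code:

# print the reserved block count for an ext2/ext3 filesystem
tune2fs -l /dev/sdXN | grep -i 'reserved block count'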

bskrakes 04-18-2007 11:15 PM

Hmm, that would make sense if that's the case. I am using the ext3 file system....

Do you know if there is a way to calculate it? I also know that when I set up the system it was through a logical volume; I remember setting something to 32MB, though I didn't really understand what that setting was for.

Any sites you can point me at or more information you can give me on that?

TY osor

osor 04-19-2007 09:37 AM

For each partition that’s ext2/3, try using dumpe2fs. E.g., to find out how many reserved blocks are in my boot partition (/dev/sda1), I do:
Code:

# dumpe2fs -h /dev/sda1
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:  <none>
Last mounted on:          <not available>
Filesystem UUID:          f00a76d7-1a26-4cae-be62-1c55009a1a63
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      filetype sparse_super
Default mount options:    (none)
Filesystem state:        not clean
Errors behavior:          Continue
Filesystem OS type:      Linux
Inode count:              62248
Block count:              248976
Reserved block count:    12448
Free blocks:              228801
Free inodes:              62199
First block:              1
Block size:              1024
Fragment size:            1024
Blocks per group:        8192
Fragments per group:      8192
Inodes per group:        2008
Inode blocks per group:  251
Filesystem created:      Fri Aug 25 22:32:11 2006
Last mount time:          Thu Apr 19 10:30:54 2007
Last write time:          Thu Apr 19 10:30:54 2007
Mount count:              5
Maximum mount count:      37
Last checked:            Wed Feb 21 23:53:12 2007
Check interval:          15552000 (6 months)
Next check after:        Tue Aug 21 00:53:12 2007
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              128
Default directory hash:  tea
Directory Hash Seed:      47ca78be-fe90-4732-93e5-6e7cb1c96162

The relevant parts here are the “Reserved block count” and “Block size” lines.

You might be able to change the settings with tune2fs.
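For example (purely illustrative; /dev/sdXN is a placeholder, not a device from this thread):

Code:

# lower the root reservation from the default 5% to 1%
tune2fs -m 1 /dev/sdXN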

bskrakes 04-19-2007 09:52 AM

Ok so I tried that and got this error:

[root@sdm ~]# dumpe2fs -h /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.

I am pretty sure I got this error because I am using Logical Volumes rather than straight out partitions.

Any ideas?

TY again osor! I will write this tip down for other Linux machines I have :)

osor 04-19-2007 12:04 PM

Quote:

Originally Posted by bskrakes
Ok so I tried that and got this error:

[root@sdm ~]# dumpe2fs -h /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.

I am pretty sure I got this error because I am using Logical Volumes rather than straight out partitions.

Any ideas?

TY again osor! I will write this tip down for other Linux machines I have :)

You can’t tell dumpe2fs that /dev/sdb is an ext2/3 filesystem, because it isn’t. Generally, dumpe2fs doesn’t care how your partitions are implemented (logical or physical), as long as you point it at a block device (in some cases an image file will suffice) which contains a single ext2 or ext3 filesystem. Try “dumpe2fs -h /dev/mapper/VolGroup01-LogVol00” (which I assume is the block device mounted on your home partition).
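If you are not sure what the logical volume’s device node is called, a couple of ways to list them (assuming the LVM2 userspace tools are installed):

Code:

# list logical volumes and the volume groups they belong to
lvs
# or just look at the device-mapper nodes that df already shows
ls -l /dev/mapper/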

bskrakes 04-19-2007 11:50 PM

Reserved Blocks -- Hard Drive shows incorrect total
 
Bingo, that pulled up some info:

Code:

dumpe2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 744fbac6-6c6d-4b62-bec5-69020ff0ad96
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122093568
Block count: 122085376
Reserved block count: 6104268
Free blocks: 100899697
Free inodes: 122075294
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 994
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 32768
Inode blocks per group: 1024
Filesystem created: Wed Apr 18 21:33:39 2007
Last mount time: Fri Apr 20 21:36:45 2007
Last write time: Fri Apr 20 21:36:45 2007
Mount count: 7
Maximum mount count: -1
Last checked: Wed Apr 18 21:33:39 2007
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
First orphan inode: 37945361
Default directory hash: tea
Directory Hash Seed: da8ff037-0329-454b-b6d6-77e55810347d
Journal backup: inode blocks
Journal size: 128M


Alright so I am going to work with this. Thanks osor. This really helps. Now to see how this helps with my problem.

Cheers,

bskrakes 04-20-2007 12:07 AM

Ok so I totally don't know what to use those numbers for. I did find this site that converts blocks to gigabytes:

http://www.unitconversion.org/data-s...onversion.html

None of the numbers I convert there make sense. I also note that my block size is 4096. Is that 4096 bytes, KB, MB, or GB?

I am not going to lie, I don't really understand the block concept; the only thing that rings a bell in that report is the inode, which I think has to do with your directory filling up?!

Hope you can continue to help osor, thanks again! I also hope that you are not getting frustrated with the NOOB... haha. TY

osor 04-20-2007 08:15 AM

The numbers are simple enough to understand…

Let’s look first at your block count (122085376). This is the number of blocks in the particular filesystem. We also have to know about how big each block is — the block size (4096 bytes). So to see how much space your filesystem has for use, just multiply 122085376 × 4096 = 500061700096. We can also convert the bytes to gigabytes (more accurately gibibytes): 500061700096 × 2^(-30) = 465.71875.

We now look at the reserved block count (6104268). Let’s convert this to bytes as well: 6104268 × 4096 = 25003081728. Now, let’s subtract this from our previous total: 500061700096 - 25003081728 = 475058618368. Now, convert to gibibytes: 475058618368 × 2^(-30) = 442.432815552.

So we see that in terms of absolute space, the filesystem holds 465.71875 GiB, but in terms of space usable by a normal user, the filesystem holds 442.43282 GiB.
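If it helps, the same arithmetic can be reproduced at the command line (a quick sketch assuming bash and bc are available; the two counts are taken from your dumpe2fs output above):

Code:

$ echo $((122085376 * 4096))                                  # total bytes
500061700096
$ echo $((6104268 * 4096))                                    # reserved bytes
25003081728
$ echo 'scale=5; 500061700096 / 2^30' | bc                    # total GiB
465.71875
$ echo 'scale=5; (500061700096 - 25003081728) / 2^30' | bc    # user-usable GiB
442.43281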

The use of reserved bytes “avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem.” (taken from mke2fs manpage). Since this is serving as your home partition, there will never be a time where root-owned daemons need to write to the filesystem when it’s full (unless you have an unusual setup). So it is safe to get rid of the reserved blocks (i.e., turn them into normal, usable blocks) with this command: “sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00” (NOTE: DO THIS ONLY WHEN THE DEVICE IS NOT MOUNTED OR MOUNTED READ-ONLY)
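A minimal sketch of that sequence, assuming nothing is currently using /home and that the filesystem is listed in /etc/fstab:

Code:

umount /home
tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00
mount /home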

DISCLAIMER: I AM NOT RESPONSIBLE FOR ANY DATA LOSS OR HEADACHES RESULTING FROM THE USE OF MY INSTRUCTIONS.

bskrakes 04-21-2007 11:51 AM

Alright makes sense now. I ran that line and here is what it output:

[root@sdm ~]# sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00
tune2fs 1.39 (29-May-2006)
Setting reserved blocks percentage to 0% (0 blocks)
Setting reserved blocks count to 0

Going to reboot and see what happens. Don't worry I made a backup of the files which are important to me. Probably a good thing to put a disclaimer there for all those who would forget or not think about backing up the important files. I will let you know.

TY osor

bskrakes 04-21-2007 12:09 PM

Hmm... well here is some info:

Code:

[root@sdm ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                       48G  3.4G   42G   8% /
/dev/sda1             244M   17M  215M   8% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                       94G  2.5G   87G   3% /home
/dev/mapper/VolGroup01-LogVol00
                      452G   66G  387G  15% /media
/dev/mapper/VolGroup00-LogVol01
                      961M   23M  889M   3% /tmp



Code:

[root@sdm ~]# dumpe2fs -h /dev/mapper/VolGroup01-LogVol00
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 744fbac6-6c6d-4b62-bec5-69020ff0ad96
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122093568
Block count: 122085376
Reserved block count: 0
Free blocks: 101191813
Free inodes: 122075302
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 994
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 32768
Inode blocks per group: 1024
Filesystem created: Wed Apr 18 21:33:39 2007
Last mount time: Sun Apr 22 10:50:52 2007
Last write time: Sun Apr 22 10:50:52 2007
Mount count: 9
Maximum mount count: -1
Last checked: Wed Apr 18 21:33:39 2007
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: da8ff037-0329-454b-b6d6-77e55810347d
Journal backup: inode blocks
Journal size: 128M


So it looks like it worked, but the drive is still only showing 452GB.... shouldn't the system show the new changes (I did do a reboot)?

TY osor.

Saritul 04-21-2007 12:18 PM

df -h
What does it say?

osor 04-21-2007 02:47 PM

I am an idiot :D. The number of blocks returned by dumpe2fs is the total number of physical blocks available for use by the filesystem. The filesystem itself has a deal of overhead — blocks are divided into block groups to minimize seek times and fragmentation; there are about 255 blocks per block group that store filesystem information such as backup copies of the superblock, the block usage bitmap for the block group, the inode usage bitmap for the block group, and an inode table for the block group. The rest of the blocks are data blocks (the ones usable by you). So the total usable space (reserved or otherwise) will always be less than the total number of blocks reported by dumpe2fs. You can examine each block group’s specifics in detail by looking at the complete output of dumpe2fs (i.e., “dumpe2fs /dev/mapper/VolGroup01-LogVol00”).
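To get a feel for what that metadata costs on this particular filesystem, here is a rough back-of-the-envelope estimate using the numbers from your dumpe2fs output (assuming bash and bc; 3726 is 122085376 blocks / 32768 blocks-per-group, rounded up):

Code:

$ echo 'scale=2; 3726 * 1024 * 4096 / 2^30' | bc    # inode tables, in GiB
14.55
$ echo 'scale=2; 3726 * 2 * 4096 / 2^30' | bc       # block + inode bitmaps, in GiB
.02

That is roughly 14.6 GiB of metadata (plus a little more for the superblock and group-descriptor backups), which is about the difference between the 465.7 GiB of raw blocks and the roughly 451 GiB “Size” that df reports for this filesystem.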

So the whole thing about reserved blocks didn’t change the number of blocks reported by statfs() (and consequently the “Size” field from df). You did get some space for “free” however:

Original “df -h”
Code:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      142G  3.4G  132G  3% /
/dev/sda1              99M  17M  78M  18% /boot
tmpfs                501M    0  501M  0% /dev/shm
/dev/mapper/VolGroup01-LogVol00
                      452G  82G  347G  20% /home
/dev/sda5            487M  13M  449M  3% /tmp

Newer “df -h”
Code:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                      48G  3.4G  42G  8% /
/dev/sda1            244M  17M  215M  8% /boot
tmpfs                501M    0  501M  0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                      94G  2.5G  87G  3% /home
/dev/mapper/VolGroup01-LogVol00
                      452G  66G  387G  15% /media
/dev/mapper/VolGroup00-LogVol01
                      961M  23M  889M  3% /tmp

If you really wanted to, you could cut down on the space used for filesystem information (at the time of filesystem creation) by reducing the number of block groups created by mke2fs. This would, however, be futile in all but the most specialized circumstances, because read times would increase dramatically (mke2fs optimizes reading and fragmentation when choosing the number of block groups).
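For what it’s worth, the knob that usually makes the biggest dent in metadata overhead at creation time is the bytes-per-inode ratio rather than the block-group layout. A purely illustrative sketch (mke2fs destroys whatever is on its target, and the device path here is a placeholder, not anything on this system):

Code:

# -j adds an ext3 journal; -i 1048576 allocates one inode per MiB of space,
# which shrinks the inode tables; -m 0 reserves no blocks for root
mke2fs -j -i 1048576 -m 0 /dev/VolGroupXX/LogVolXX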

So the moral(s) of the story:
  • Drive manufacturers “lie” by saying 500 GigaBytes when there are only 465.71875 binary GigaBytes of physical space.
  • The number of GigaBytes usable for data will always be less (and just how much less depends on filesystem specifics).
  • Linux’s extended filesystems usually have some amount of reserved space for root to use once the filesystem is full. Depending on how the filesystem will be used, running out of space may not be catastrophic, and the reserved space can safely be removed. (A filesystem used for /home is a good example of a situation in which reserved space is not important; a filesystem used for / or /var is a good example of one in which it is.)

bskrakes 04-22-2007 11:15 AM

Awesome. I am an idiot too, though; I missed the "original" and "new" available space. I was only looking at the size rather than the available space. Other than that, what you say makes sense. Once you refreshed my memory about reserved space and how Linux likes to be smart, it made sense. The loss seemed high though, and that's why I wanted to look into why a 500GB hard drive (or yes, those tricky manufacturers who somehow pull the blinds over our eyes) is actually 465GB (I already knew hard drives worked like that). So when I got 452GB of available space I was puzzled.

Anyway this has been a great lesson and another wonderful wealth of knowledge to gain. Thank you osor for all of your time and comments. Have a good one!

bskrakes 04-22-2007 11:40 AM

WAIT..... that isn't right. The only reason the space is different is because I had fewer files on there than before.... check the used space....

osor 04-22-2007 12:17 PM

Quote:

Originally Posted by bskrakes
WAIT..... that isn't right. The only reason the space is different is because I had fewer files on there than before.... check the used space....

I think this is mostly true, but it doesn’t “add up” altogether:

Original Used Space: 82G
New Used Space: 66G
Size of Files Cleared: 16G (82-66)

Original Avail. Space: 347G
New Avail. Space: 387G
Size of Space Freed: 40G (387-347)

Amount of Space Freed Because of Elimination of Reserved Space: 24G (40-16)

So the space you got “for free” is roughly the old reservation of about 23G (6104268 blocks × 4 KiB), give or take rounding.

bskrakes 04-22-2007 02:00 PM

Ok yes you are right. At first glance I see where I made the mistake. It still adds up to 452 though.... I will have to take a good look at this because I keep going back and forth.

bskrakes 04-24-2007 11:12 AM

Ok so osor.... if you take the current system specs versus the old ones....

NEW:
Code:

[brendan@sdm ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                       48G  3.4G   42G   8% /
/dev/sda1             244M   17M  215M   8% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                       94G  2.5G   87G   3% /home
/dev/mapper/VolGroup01-LogVol00
                      452G   83G  370G  19% /media
/dev/mapper/VolGroup00-LogVol01
                      961M   23M  890M   3% /tmp


Now if you add 370G (available space) and 83G (used space) you get 453G.... so I am failing to see where we gained the space. Maybe df -h doesn't show it?

To help more I have run this test as well:

du -s -k /media/samba | sort -k1nr | less
85887220 /media/samba/

Err I don't know.

osor 04-24-2007 04:17 PM

Quote:

Originally Posted by bskrakes
Now if you add 370G (available space) and 83G (used space) you get 453G.... so I am failing to see where we gained the space. Maybe df -h doesn't show it?

I am failing to see the problem, but I’ll ramble on anyway ;). Is your question that 453 != 452? If so, that is just rounding error. Try the same calculation using output from “df” instead of “df -h” (where it gives you block count).

In terms of gained space, it will be visible in the “Avail” field. Looking at the source of df and the manpage for statvfs, we see the following (a quick way to peek at these counts yourself is sketched after the list):
  • The second field in df’s output is the number of data blocks on the filesystem (corresponding to the f_blocks member of struct statvfs). This count includes both the reserved and non-reserved blocks.
  • The fourth field in df’s output is the number of available blocks (corresponding to f_bavail). This is the number of free blocks for non-privileged processes.
  • The df utility does not report any count corresponding to f_bfree (the number of free blocks, reserved and otherwise). It does, however, use this member (which it calls “uintmax_t available_to_root”) when calculating the third and fifth fields.
So the “unreserving” of previously reserved blocks will not show up in the second field of df (since the total number of blocks never changed). It will (and did) show up in the fourth field.
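If you want to peek at those statvfs counts without going through df, one option (assuming GNU coreutils’ stat) is:

Code:

# "Blocks: Total / Free / Available" in the output correspond to
# f_blocks / f_bfree / f_bavail from statvfs()
stat -f /media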

P.S.
I don’t see what you are trying to do with that du command.

bskrakes 04-24-2007 05:07 PM

It is quite ok, ramble all you want. Talking a problem through with an individual who knows what they're talking about is totally worth it! I guess as long as the POINT is there and not a bunch of gibberish.

Alright, just using plain df (I should have thought of that; it shows the raw 1K-block counts) works:

Code:

[brendan@sdm ~]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                      49580256   3544104  43476972   8% /
/dev/sda1               248895     16671    219374   8% /boot
tmpfs                   512220         0    512220   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                      98555352   2608300  90859820   3% /home
/dev/mapper/VolGroup01-LogVol00
                     473047768  86090024 386957744  19% /media
/dev/mapper/VolGroup00-LogVol01
                        983960     22812    910360   3% /tmp



So the block size is right. BUT why would df -h show different results? We cleared the reserved blocks, so shouldn't it show that? I am going to try that statvfs; I do know the df stuff, I just didn't think about checking the basic block size. Trying to think about this one too hard, I think.

Thanks again osor. Have a great night.

osor 04-24-2007 07:39 PM

Quote:

Originally Posted by bskrakes
BUT why would df -h show different results?

It’s rounding error; it depends on whether you add first and then round, or round each number first and then add. Here is each count converted from 1K-blocks to GiB (the figure df -h reports after rounding up is in parentheses):

086090024 × 2^(-20) = 082.102 (083G)
386957744 × 2^(-20) = 369.032 (370G)
473047768 × 2^(-20) = 451.134 (452G)

If I round up and then add them, I get 83 + 370 = 453 != 452.

Admittedly, it might help if df had saner rounding (rather than just rounding up).
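The same conversions with bc, for anyone following along (bc truncates rather than rounds, so the last digit differs slightly from the hand-rounded figures above):

Code:

$ echo 'scale=4; 86090024 / 2^20' | bc
82.1018
$ echo 'scale=4; 386957744 / 2^20' | bc
369.0316
$ echo 'scale=4; 473047768 / 2^20' | bc
451.1335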

bskrakes 04-24-2007 10:37 PM

I didn't mean different results with 452/453. I meant with the block size. If the block size has changed, shouldn't it show 465 instead of 453?

Cheers and sorry for the confusion!

osor 04-25-2007 11:00 AM

I’m still confused… what does block size have to do with anything? Your “true” block size (as reported by dumpe2fs and statvfs() — the size of each filesystem block) is 4kB. The size of the unit given by df to show how much space you have is 1kB. But the block size never changed. If you look at the output of dumpe2fs, you’ll still see the block size says 4096 bytes (i.e., 4kB) and the header in the output of df says “1K-blocks”. So plain “df” displays a count of how many kilobytes each filesystem has in various ways (total, used, available, etc.). When you use “df -h”, it displays the same information, but just converts it to a human-readable form — namely gigabytes.
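You can also just ask df for a different display unit (GNU coreutils), which makes the relationship a bit more obvious:

Code:

# show the same filesystem in its own 4 KiB blocks, and then in whole GiB
df -B 4096 /media
df -B 1G /media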

How do you expect to get 465G from 452G anyway? As I said before, any space that was gained in unreserving will not show up in the total, but only in the available. We saw that the space gained was roughly the old reserved count (about 23G), and it showed up in the available section of df.

jpmckinney 03-22-2008 12:07 PM

This is what changed:

Original: 82 used + 347 avail = 429 GiB (23 short of the size of 452 GiB)
Newer: 66 used + 387 avail = 453 GiB (essentially the full 452 GiB size, within rounding)

In the original, 23 GiB was reserved. Now it's not. You gained 23 GiB. You can't increase the size of your disk past 452 GiB.

Most hard drives are advertised as GB not GiB, but Linux lists filesystem sizes in terms of GiB:

One GB = 1000 MB = 1,000,000,000 bytes.
One GiB = 1024 MiB = 1,073,741,824 bytes.

500 GB in GiB = 465 GiB, which is what Linux should report as the size of your 500 GB drive. Why it reports 452 GiB, I don't know. My 500 GB drive is reported as 459 GiB.

http://en.wikipedia.org/wiki/Gibibyte
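For reference, the conversion is easy to reproduce with bc:

Code:

$ echo 'scale=2; 500 * 10^9 / 2^30' | bc
465.66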

----

Original “df -h”
Code:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      142G  3.4G  132G  3% /
/dev/sda1              99M  17M  78M  18% /boot
tmpfs                501M    0  501M  0% /dev/shm
/dev/mapper/VolGroup01-LogVol00
                      452G  82G  347G  20% /home
/dev/sda5            487M  13M  449M  3% /tmp

Newer “df -h”
Code:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                      48G  3.4G  42G  8% /
/dev/sda1            244M  17M  215M  8% /boot
tmpfs                501M    0  501M  0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                      94G  2.5G  87G  3% /home
/dev/mapper/VolGroup01-LogVol00
                      452G  66G  387G  15% /media
/dev/mapper/VolGroup00-LogVol01
                      961M  23M  889M  3% /tmp


bskrakes 03-24-2008 04:18 PM

I haven't looked into this for a while! As for your comment jpmckinney, that all makes sense, but what I don't get is why Linux doesn't display the drive as 465 GB. As you said, yours shows up as 459GB. Could it be the Linux file system? I noticed that if I go through the GUI it shows 465 - I am going to recheck that tonight though. In Windows a 500GB drive shows up as 465GB, which is what you would expect in Linux too. Maybe I have some bad sectors?!

Thanks for your reply!

jpmckinney 03-24-2008 11:45 PM

Maybe in Linux, it reports the size of the drive minus the filesystem overhead and journal overhead; and in Windows (and Mac OS X), it just reports the size of the drive. That's my guess.

