500 GB SATA Hard Drive - volume issue >> shows 452 GB
Hello all,
Ok, so I know that 500 GB is actually equal to 465 GB, where 465 GB is the actual space you have to use. I just updated to Fedora Core 6 and my drive is showing only 452 GB. If I view the logical volume tool via the GUI in X, it shows the drive as 465 GB. No, I did not set up the drive with a 452 GB partition... so my question is: is there a good way to figure out why my drive is only reading 452 GB rather than 465 GB? I could reformat, but I don't want to since I have this baby all set up. Here is my system layout:
Code:
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 142G  3.4G  132G   3% /
/dev/sda1                        99M   17M   78M  18% /boot
tmpfs                           501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup01-LogVol00 452G   82G  347G  20% /home
/dev/sda5                       487M   13M  449M   3% /tmp
Thanks a bunch, and hopefully there is an answer out there for me. Oh, and I know about smartd, but I don't think that will help my cause; correct me if I am wrong. |
Perhaps this has to do with the filesystem you are using. I think the Linux ext filesystems usually keep some blocks "reserved for the superuser" to use when the filesystem is full (by default this is 5% of total space, IIRC).
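For scale (my own arithmetic, not from the post above): 5% of a filesystem the size of a "500 GB" drive is a sizeable chunk.

```shell
#!/bin/sh
# The default ext2/3 reserve is 5% of the filesystem's blocks. On a
# filesystem roughly the size of a "500 GB" drive, that 5% works out to:
awk 'BEGIN {
    total_gib = 500 * 1000^3 / 1024^3       # 500 decimal GB in GiB
    printf "reserved: %.1f GiB\n", 0.05 * total_gib
}'
```

That figure reappears later in the thread when dumpe2fs shows the actual reserved block count.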
|
Hmm, that would make sense if that's the case. I am using the ext3 filesystem...
Do you know if there is a way to calculate it? I also know that when I set up the system it was through a logical volume; I remember setting something regarding 32 MB, though I didn't really understand what the setting was for. Any sites you can point me at, or more information you can give me on that? TY osor |
For each partition that’s ext2/3, try using dumpe2fs. E.g., to find out how many reserved blocks are in my boot partition (/dev/sda1), I do:
Code:
# dumpe2fs -h /dev/sda1
You might be able to change the settings with tune2fs. |
Ok so I tried that and got this error:
Code:
[root@sdm ~]# dumpe2fs -h /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.
I am pretty sure I got this error because I am using logical volumes rather than plain partitions. Any ideas? TY again osor! I will write this tip down for the other Linux machines I have :) |
Reserved Blocks -- Hard Drive shows incorrect total
Bingo, that pulled up some info..... -->
Code:
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          744fbac6-6c6d-4b62-bec5-69020ff0ad96
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122093568
Block count:              122085376
Reserved block count:     6104268
Free blocks:              100899697
Free inodes:              122075294
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      994
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   1024
Filesystem created:       Wed Apr 18 21:33:39 2007
Last mount time:          Fri Apr 20 21:36:45 2007
Last write time:          Fri Apr 20 21:36:45 2007
Mount count:              7
Maximum mount count:      -1
Last checked:             Wed Apr 18 21:33:39 2007
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
First orphan inode:       37945361
Default directory hash:   tea
Directory Hash Seed:      da8ff037-0329-454b-b6d6-77e55810347d
Journal backup:           inode blocks
Journal size:             128M
Alright, so I am going to work with this. Thanks osor. This really helps. Now to see how this helps with my problem. Cheers, |
Ok, so I totally don't know what to use those numbers for. I did find this site that converts blocks to gigabytes:
http://www.unitconversion.org/data-s...onversion.html
None of the numbers I convert there make sense. I also note that my block size is 4096. Is that 4096 bytes, KB, MB, or GB? I am not going to lie: I don't understand the block concept. The only thing that rings a bell in that report is the inode, which I think has to do with your directory filling up?!?! Hope you can continue to help, osor. Thanks again! I also hope that you are not getting frustrated with the noob... haha. TY |
The numbers are simple enough to understand…
Let’s look first at your block count (122085376). This is the number of blocks in the particular filesystem. We also have to know how big each block is: the block size (4096 bytes). So to see how much space your filesystem has for use, just multiply 122085376 × 4096 = 500061700096. We can also convert the bytes to gigabytes (more accurately, gibibytes): 500061700096 × 2^(-30) = 465.71875.

We now look at the reserved block count (6104268). Let’s convert this to bytes as well: 6104268 × 4096 = 25003081728. Now, let’s subtract this from our previous total: 500061700096 - 25003081728 = 475058618368. Converting to gibibytes: 475058618368 × 2^(-30) ≈ 442.432815552.

So we see that in terms of absolute space, the filesystem holds 465.71875 GiB, but in terms of space usable by a normal user, the filesystem holds 442.43282 GiB. The use of reserved blocks “avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem” (taken from the mke2fs manpage).

Since this is serving as your home partition, there will never be a time when root-owned daemons need to write to the filesystem when it’s full (unless you have an unusual setup). So it is safe to get rid of the reserved blocks (i.e., turn them into normal, usable blocks) with this command:
Code:
sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00
(NOTE: DO THIS ONLY WHEN THE DEVICE IS NOT MOUNTED OR MOUNTED READ-ONLY.) DISCLAIMER: I AM NOT RESPONSIBLE FOR ANY DATA LOSS OR HEADACHES RESULTING FROM THE USE OF MY INSTRUCTIONS. |
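The arithmetic above can be reproduced with a short shell snippet (the values are hardcoded from the dumpe2fs output earlier in the thread; awk is used only to keep the fractional GiB):

```shell
#!/bin/sh
# Figures taken from the dumpe2fs -h output above.
block_count=122085376
block_size=4096
reserved_blocks=6104268

total_bytes=$((block_count * block_size))          # 500061700096
reserved_bytes=$((reserved_blocks * block_size))   # 25003081728
usable_bytes=$((total_bytes - reserved_bytes))     # 475058618368

# Convert to GiB by dividing by 2^30.
awk -v t="$total_bytes" -v u="$usable_bytes" 'BEGIN {
    printf "total:  %.5f GiB\n", t / 2^30
    printf "usable: %.5f GiB\n", u / 2^30
}'
```

Run as-is, this prints 465.71875 GiB total and 442.43282 GiB usable, matching the figures worked out by hand above.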
Alright makes sense now. I ran that line and here is what it output:
Code:
[root@sdm ~]# sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00
tune2fs 1.39 (29-May-2006)
Setting reserved blocks percentage to 0% (0 blocks)
Setting reserved blocks count to 0
Going to reboot and see what happens. Don't worry, I made a backup of the files which are important to me. Probably a good thing to put a disclaimer there for all those who would forget or not think about backing up their important files. I will let you know. TY osor |
Hmm... well here is some info:
Code:
[root@sdm ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02  48G  3.4G   42G   8% /
/dev/sda1                       244M   17M  215M   8% /boot
tmpfs                           501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03  94G  2.5G   87G   3% /home
/dev/mapper/VolGroup01-LogVol00 452G   66G  387G  15% /media
/dev/mapper/VolGroup00-LogVol01 961M   23M  889M   3% /tmp

[root@sdm ~]# dumpe2fs -h /dev/mapper/VolGroup01-LogVol00
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          744fbac6-6c6d-4b62-bec5-69020ff0ad96
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122093568
Block count:              122085376
Reserved block count:     0
Free blocks:              101191813
Free inodes:              122075302
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      994
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   1024
Filesystem created:       Wed Apr 18 21:33:39 2007
Last mount time:          Sun Apr 22 10:50:52 2007
Last write time:          Sun Apr 22 10:50:52 2007
Mount count:              9
Maximum mount count:      -1
Last checked:             Wed Apr 18 21:33:39 2007
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      da8ff037-0329-454b-b6d6-77e55810347d
Journal backup:           inode blocks
Journal size:             128M
So it looks like it worked, but the drive is still only showing 452 GB... shouldn't the system show the new changes? (I did do a reboot.) TY osor. |
df -h
What does it say? |
I am an idiot :D. The number of blocks returned by dumpe2fs is the total number of physical blocks available for use by the filesystem. The filesystem itself has a good deal of overhead: blocks are divided into block groups to minimize seek times and fragmentation, and a number of blocks in each block group store filesystem information such as backup copies of the superblock, the block usage bitmap for the block group, the inode usage bitmap for the block group, and an inode table for the block group. The rest of the blocks are data blocks (the ones usable by you). So the total usable space (reserved or otherwise) will always be less than the total number of blocks reported by dumpe2fs. You can examine each block group’s specifics in detail by looking at the complete output of dumpe2fs (i.e., “dumpe2fs /dev/mapper/VolGroup01-LogVol00”).
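As a rough sanity check (my own back-of-the-envelope numbers, not from the thread): with 32768 blocks per group, the dumpe2fs figures imply 3726 block groups, and each group carries at least an inode table (1024 blocks), a block bitmap, and an inode bitmap. A sketch of the overhead, deliberately ignoring group descriptors, sparse superblock backups, and the journal:

```shell
#!/bin/sh
# Back-of-the-envelope ext3 overhead estimate, using the dumpe2fs figures
# above. Group descriptors, superblock backups, and the journal are
# ignored, so this slightly underestimates the real overhead.
block_count=122085376
blocks_per_group=32768
inode_blocks_per_group=1024
bitmap_blocks_per_group=2    # one block bitmap + one inode bitmap per group

groups=$(( (block_count + blocks_per_group - 1) / blocks_per_group ))
overhead=$(( groups * (inode_blocks_per_group + bitmap_blocks_per_group) ))

awk -v bc="$block_count" -v ov="$overhead" 'BEGIN {
    bs = 4096
    printf "raw size:        %.2f GiB\n", bc * bs / 2^30
    printf "overhead (est.): %.2f GiB\n", ov * bs / 2^30
    printf "usable (est.):   %.2f GiB\n", (bc - ov) * bs / 2^30
}'
```

With these numbers the estimate lands near 451 GiB usable, close to the 473047768 1K-blocks (about 451.1 GiB) that plain df reports further down the thread, which df -h then rounds up to 452G.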
So the whole thing about reserved blocks didn’t change the number of blocks reported by statfs() (and consequently the “Size” field from df). You did get some space for “free”, however.

Original “df -h”:
Code:
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00 452G   82G  347G  20% /home
New “df -h”:
Code:
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00 452G   66G  387G  15% /media
So the moral(s) of the story:
|
Awesome, I am an idiot too. I missed the "original" and "new" available space; I was only looking at size rather than available space. Other than that, what you say makes sense. Once you refreshed my memory about reserved space and how Linux likes to be smart, it made sense. The loss seemed high, though, and that's why I wanted to look into why a 500 GB hard drive (or yes, those tricky manufacturers who somehow pull the wool over our eyes) is actually 465 GB (which I already knew is how hard drives work). So when I got 452 GB of available space, I was puzzled.
Anyway, this has been a great lesson and another wonderful wealth of knowledge. Thank you osor for all of your time and comments. Have a good one! |
WAIT..... that isn't right. The only reason the space is different is because I had fewer files on there than before.... check the used space....
|
Original used space: 82G
New used space: 66G
Size of files cleared: 16G (82 - 66)

Original avail. space: 347G
New avail. space: 387G
Size of space freed: 40G (387 - 347)

Amount of space freed because of the elimination of reserved space: 24G (40 - 16)

So the space you got “for free” turns out to be substantial after all: about 24G, which lines up with the 6104268 reserved blocks (roughly 23.3 GiB) once df’s rounding is taken into account. |
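The same bookkeeping as a quick script (my own sketch; df -h reports whole gigabytes, so everything here is subject to its rounding):

```shell
#!/bin/sh
# Differences between the original and new df -h figures for the volume.
orig_used=82;   new_used=66
orig_avail=347; new_avail=387

files_cleared=$(( orig_used - new_used ))        # files deleted between runs
space_freed=$(( new_avail - orig_avail ))        # growth in the Avail column
from_reserved=$(( space_freed - files_cleared )) # share due to tune2fs -m 0

echo "files cleared: ${files_cleared}G"
echo "space freed:   ${space_freed}G"
echo "from reserved: ${from_reserved}G"
# For comparison, the reserved block count dumpe2fs showed earlier:
awk 'BEGIN { printf "reserved was:  %.1f GiB\n", 6104268 * 4096 / 2^30 }'
```

The ~24G difference agrees with the ~23.3 GiB reserve to within df’s whole-gigabyte rounding.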
Ok, yes, you are right. At first glance I see where I made the mistake. It still adds up to 452, though.... I will have to take a good look at this because I keep going back and forth.
|
Ok, so osor.... if you take the current system specs versus the old ones....

NEW:
Code:
[brendan@sdm ~]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02  48G  3.4G   42G   8% /
/dev/sda1                       244M   17M  215M   8% /boot
tmpfs                           501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03  94G  2.5G   87G   3% /home
/dev/mapper/VolGroup01-LogVol00 452G   83G  370G  19% /media
/dev/mapper/VolGroup00-LogVol01 961M   23M  890M   3% /tmp
Now if you add 370G (available space) and 83G (used space) you get 453G.... so I am failing to see where we gained the space. Maybe df -h doesn't show it? To help more, I have run this test as well:
Code:
du -s -k /media/samba | sort -k1nr | less
85887220        /media/samba/
Err, I don't know. |
In terms of gained space, it will be visible in the “Avail” field. Looking at the source of df and the manpage for statvfs, we see the following:
P.S. I don’t see what you are trying to do with that du command. |
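The statvfs fields osor mentions can also be inspected directly with GNU stat's -f mode, without writing any C (a sketch; the exact numbers depend on your filesystem):

```shell
#!/bin/sh
# GNU "stat -f" prints the statfs/statvfs fields that df is built on:
#   %b = total data blocks,  %f = free blocks,
#   %a = free blocks available to unprivileged users,  %S = block size
stat -f --format='total=%b free=%f avail=%a bsize=%S' /
# With a nonzero reserve, %a < %f; after "tune2fs -m 0 -r 0" they match.
```

df's "Size" comes from the total-blocks field, which is why unreserving blocks changes "Avail" but not "Size".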
It is quite ok, ramble all you want. Talking a problem through with an individual who knows what they're talking about is totally worth it, as long as the POINT is there and it's not a bunch of gibberish.

Alright, just using plain df (I should have thought of that; it shows sizes in 1K blocks) works:
Code:
[brendan@sdm ~]$ df
Filesystem                      1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol02  49580256   3544104  43476972   8% /
/dev/sda1                          248895     16671    219374   8% /boot
tmpfs                              512220         0    512220   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03  98555352   2608300  90859820   3% /home
/dev/mapper/VolGroup01-LogVol00 473047768  86090024 386957744  19% /media
/dev/mapper/VolGroup00-LogVol01    983960     22812    910360   3% /tmp
So the block size is right. BUT why would df -h show different results? We cleared the reserved blocks, so shouldn't it show that? I am going to try that statvfs. I do know the df stuff, but I just didn't think about checking the basic block size; trying to think about this one too hard, I think. Thanks again osor. Have a great night. |
86090024 × 2^(-20) ≈ 82.102 GiB (df -h shows 83G)
386957744 × 2^(-20) ≈ 369.032 GiB (df -h shows 370G)
473047768 × 2^(-20) ≈ 451.134 GiB (df -h shows 452G)

If I round up and then add, I get 83 + 370 = 453 != 452. Admittedly, it might help if df had saner rounding (rather than always rounding up). |
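The round-up behavior is easy to reproduce (a sketch, assuming df -h rounds each field up independently, which is what the 83 + 370 = 453 discrepancy suggests):

```shell
#!/bin/sh
# Reproduce df -h's round-up from the 1K-block figures above.
for kb in 86090024 386957744 473047768; do
    awk -v kb="$kb" 'BEGIN {
        gib = kb / 2^20                     # 1K blocks -> GiB
        up = int(gib); if (up < gib) up++   # round up, as df -h does
        printf "%9d KiB = %.3f GiB -> shown as %dG\n", kb, gib, up
    }'
done
```

Each of the three fields is rounded up separately, so the rounded used and available figures can sum to one more than the rounded size.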
I didn't mean different results with 452/453; I meant with the block size. If the block size has changed, shouldn't it show 465 instead of 453?
Cheers, and sorry for the confusion! |
I’m still confused… what does block size have to do with anything? Your “true” block size (as reported by dumpe2fs and statvfs() — the size of each filesystem block) is 4kB. The size of the unit given by df to show how much space you have is 1kB. But the block size never changed. If you look at the output of dumpe2fs, you’ll still see the block size says 4096 bytes (i.e., 4kB) and the header in the output of df says “1K-blocks”. So plain “df” displays a count of how many kilobytes each filesystem has in various ways (total, used, available, etc.). When you use “df -h”, it displays the same information, but just converts it to a human-readable form — namely gigabytes.
How do you expect to get 465G from 452G anyway? As I said before, any space that was gained by unreserving will not show up in the total, but only in the available. We saw that the space gained was approximately 24G, and it showed up in the available section of df. |
This what changed:
Original: 82 used + 347 avail = 429 GiB (23 short of the 452 GiB size)
Newer: 66 used + 387 avail = 453 GiB (the 452 GiB size, within df's rounding)

In the original, 23 GiB was reserved. Now it's not. You gained 23 GiB. You can't increase the size of your disk past 452 GiB.

Most hard drives are advertised in GB, not GiB, but Linux lists filesystem sizes in GiB:
One GB = 1000 MB = 1,000,000,000 bytes.
One GiB = 1024 MiB = 1,073,741,824 bytes.
500 GB in GiB ≈ 465.7 GiB, which is roughly what Linux should report as the size of your 500 GB drive. Why it reports 452 GiB, I don't know. My 500 GB drive is reported as 459 GiB.
http://en.wikipedia.org/wiki/Gibibyte

----

Original "df -h":
Code:
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00 452G   82G  347G  20% /home
New "df -h":
Code:
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00 452G   66G  387G  15% /media
|
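For reference, the GB-to-GiB conversion described above is a one-liner (my own sketch):

```shell
#!/bin/sh
# Convert an advertised (decimal) drive size in GB to binary GiB.
gb_to_gib() {
    awk -v gb="$1" 'BEGIN { printf "%.2f", gb * 1000^3 / 1024^3 }'
}
echo "500 GB = $(gb_to_gib 500) GiB"
```

This gives roughly 465.66 GiB for a nominal 500 GB drive; anything reported below that is filesystem overhead, not a smaller disk.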
I haven't looked into this for a while! As for your comment, jpmckinney: that all makes sense, but what I don't get is why Linux doesn't display the drive as 465 GB. As you said, yours shows up as 459 GB. Could it be the Linux filesystem? I noticed that if I go through the GUI it shows 465; I am going to recheck that tonight, though. In Windows a 500 GB drive shows up as 465 GB, which is what you would expect in Linux too. Maybe I have some bad sectors?!
Thanks for your reply! |
Maybe Linux reports the size of the drive minus the filesystem and journal overhead, while Windows (and Mac OS X) just report the size of the drive. That's my guess.
|