Old 04-18-2007, 09:17 PM   #1
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Rep: Reputation: 32
500 GB SATA Hard Drive - volume issue >> shows 452 GB


Hello all,

OK, so I know that a 500 GB drive actually gives you about 465 GB of usable space. I just updated to Fedora Core 6 and my drive is showing only 452 GB. If I look at the logical volume tool in the GUI under X, it shows the drive as 465 GB. No, I did not set up the drive with a 452 GB partition... so my question is: is there a way to figure out why my drive only reports 452 GB rather than 465 GB? I could reformat, but I don't want to since I have this baby all set up.

Here is my system layout:

Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      142G  3.4G  132G   3% /
/dev/sda1              99M   17M   78M  18% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup01-LogVol00
                      452G   82G  347G  20% /home
/dev/sda5             487M   13M  449M   3% /tmp


Thanks a bunch, and hopefully there is an answer out there for me. Oh, and I know about smartd, but I don't think that will help my cause; correct me if I am wrong.
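
(For anyone landing here with the same question, here is a minimal sketch of how to compare the sizes at each layer, the LVM logical volume versus what the mounted filesystem reports. It assumes the LVM tools are installed and simply reuses the volume names shown in the df output above.)

Code:
# How big LVM thinks the logical volume is
lvs

# Exact size of the underlying block device, in bytes
blockdev --getsize64 /dev/mapper/VolGroup01-LogVol00

# What the mounted filesystem reports
df -h /home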
 
Old 04-18-2007, 09:59 PM   #2
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
Perhaps this has to do with the filesystem you are using. I think Linux's ext filesystems usually keep some blocks "reserved for the superuser" to use when the filesystem is full (by default this is 5% of the total space, IIRC).
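
(A quick way to check that theory on an ext2/3 filesystem is to read the superblock. A minimal sketch, using the /home volume name from the df output above:)

Code:
# Show the total and reserved block counts from the superblock
tune2fs -l /dev/mapper/VolGroup01-LogVol00 | grep -i 'block count'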
 
Old 04-18-2007, 11:15 PM   #3
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
Hmm, that would make sense if that's the case. I am using the ext3 filesystem...

Do you know if there is a way to calculate it? I also know that when I set up the system it was through a logical volume; I remember configuring some setting of 32 MB, though I didn't really understand what it was for.

Are there any sites you can point me to, or any more information you can give me on that?

TY osor
 
Old 04-19-2007, 09:37 AM   #4
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
For each partition that’s ext2/3, try using dumpe2fs. E.g., to find out how many reserved blocks are in my boot partition (/dev/sda1), I do:
Code:
# dumpe2fs -h /dev/sda1
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          f00a76d7-1a26-4cae-be62-1c55009a1a63
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      filetype sparse_super
Default mount options:    (none)
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              62248
Block count:              248976
Reserved block count:     12448
Free blocks:              228801
Free inodes:              62199
First block:              1
Block size:               1024
Fragment size:            1024
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         2008
Inode blocks per group:   251
Filesystem created:       Fri Aug 25 22:32:11 2006
Last mount time:          Thu Apr 19 10:30:54 2007
Last write time:          Thu Apr 19 10:30:54 2007
Mount count:              5
Maximum mount count:      37
Last checked:             Wed Feb 21 23:53:12 2007
Check interval:           15552000 (6 months)
Next check after:         Tue Aug 21 00:53:12 2007
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Default directory hash:   tea
Directory Hash Seed:      47ca78be-fe90-4732-93e5-6e7cb1c96162
The relevant parts here are the "Block count", "Reserved block count", and "Block size" lines.

You might be able to change the settings with tune2fs.
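
(For example, something along these lines would drop the reservation from the default 5% to 1%. This is only a sketch; the exact command used for this thread's volume comes up a few posts later.)

Code:
# Lower the reserved-blocks percentage from 5% to 1%
# (safest with the filesystem unmounted or mounted read-only)
tune2fs -m 1 /dev/mapper/VolGroup01-LogVol00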
 
Old 04-19-2007, 09:52 AM   #5
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
OK, so I tried that and got this error:

Code:
[root@sdm ~]# dumpe2fs -h /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.
I am pretty sure I got this error because I am using logical volumes rather than plain partitions.

Any ideas?

TY again osor! I will write this tip down for the other Linux machines I have.
 
Old 04-19-2007, 12:04 PM   #6
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
Quote:
Originally Posted by bskrakes
OK, so I tried that and got this error:

[root@sdm ~]# dumpe2fs -h /dev/sdb
dumpe2fs 1.39 (29-May-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
Couldn't find valid filesystem superblock.

I am pretty sure I got this error because I am using logical volumes rather than plain partitions.

Any ideas?
You can't tell dumpe2fs that /dev/sdb is an ext2/3 filesystem, because it isn't. Generally, dumpe2fs doesn't care how your partitions are implemented (logical or physical), as long as you give it a block device (in some cases an image file will suffice) that contains a single ext2 or ext3 filesystem. Try "dumpe2fs -h /dev/mapper/VolGroup01-LogVol00" (which I assume is the block device mounted at /home).
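
(If the exact device-mapper name isn't obvious, a short sketch, assuming an LVM2 setup with the volume names used in this thread:)

Code:
# The device-mapper nodes for LVM volumes live under /dev/mapper
ls -l /dev/mapper/

# Point dumpe2fs at the logical volume, not at the whole disk
dumpe2fs -h /dev/mapper/VolGroup01-LogVol00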
 
Old 04-19-2007, 11:50 PM   #7
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
Reserved Blocks -- Hard Drive shows incorrect total

Bingo, that pulled up some info:

Code:
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 744fbac6-6c6d-4b62-bec5-69020ff0ad96
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122093568
Block count: 122085376
Reserved block count: 6104268
Free blocks: 100899697
Free inodes: 122075294
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 994
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 32768
Inode blocks per group: 1024
Filesystem created: Wed Apr 18 21:33:39 2007
Last mount time: Fri Apr 20 21:36:45 2007
Last write time: Fri Apr 20 21:36:45 2007
Mount count: 7
Maximum mount count: -1
Last checked: Wed Apr 18 21:33:39 2007
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
First orphan inode: 37945361
Default directory hash: tea
Directory Hash Seed: da8ff037-0329-454b-b6d6-77e55810347d
Journal backup: inode blocks
Journal size: 128M


Alright so I am going to work with this. Thanks osor. This really helps. Now to see how this helps with my problem.

Cheers,
 
Old 04-20-2007, 12:07 AM   #8
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
Ok so I totally don't know what to use those numbers for. I did find this site that converts blocks to gigabytes:

http://www.unitconversion.org/data-s...onversion.html

None of the numbers I convert there make sense. I also note that my block size is 4096. Is that 4096 bytes, KB, MB, or GB?

I am not going to lie, I don't understand the block concept; the only thing that rings a bell in that report is the inode, which I think has to do with your directory filling up?

Hope you can continue to help osor, thanks again! I also hope that you are not getting frustrated with the NOOB... haha. TY

Last edited by bskrakes; 04-20-2007 at 12:09 AM.
 
Old 04-20-2007, 08:15 AM   #9
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
The numbers are simple enough to understand…

Let’s look first at your block count (122085376). This is the number of blocks in the particular filesystem. We also have to know about how big each block is — the block size (4096 bytes). So to see how much space your filesystem has for use, just multiply 122085376 × 4096 = 500061700096. We can also convert the bytes to gigabytes (more accurately gibibytes): 500061700096 × 2^(-30) = 465.71875.

We now look at the reserved block count (6104268). Let’s convert this to bytes as well: 6104268 × 4096 = 25003081728. Now, let’s subtract this from our previous total: 500061700096 - 25003081728 = 475058618368. Now, convert to gibibytes: 475058618368 × 2^(-30) = 442.432815552.

So we see that in terms of absolute space, the filesystem holds 465.71875 GiB, but in terms of space usable by a normal user, the filesystem holds 442.43282 GiB.
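
(The same arithmetic, reproduced as a quick shell calculation with bc, using the figures from the dumpe2fs output above:)

Code:
# Total filesystem size: block count x block size, then convert to GiB
echo '122085376 * 4096' | bc                          # 500061700096 bytes
echo '122085376 * 4096 / 1024^3' | bc -l              # ~465.72 GiB

# Space reserved for root, and what is left over for ordinary users
echo '6104268 * 4096 / 1024^3' | bc -l                # ~23.29 GiB reserved
echo '(122085376 - 6104268) * 4096 / 1024^3' | bc -l  # ~442.43 GiB usable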

The use of reserved blocks "avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem" (taken from the mke2fs manpage). Since this is serving as your home partition, there will never be a time when root-owned daemons need to write to the filesystem while it's full (unless you have an unusual setup). So it is safe to get rid of the reserved blocks (i.e., turn them into normal, usable blocks) with this command: "sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00" (NOTE: DO THIS ONLY WHEN THE DEVICE IS NOT MOUNTED, OR IS MOUNTED READ-ONLY).
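
(A quick sketch of how to confirm the change afterwards, simply by re-reading the superblock:)

Code:
# "Reserved block count" should now read 0
dumpe2fs -h /dev/mapper/VolGroup01-LogVol00 | grep -i reserved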

DISCLAIMER: I AM NOT RESPONSIBLE FOR ANY DATA LOSS OR HEADACHES RESULTING FROM THE USE OF MY INSTRUCTIONS.

Last edited by osor; 04-20-2007 at 09:01 AM.
 
Old 04-21-2007, 11:51 AM   #10
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
Alright, that makes sense now. I ran that command and here is the output:

Code:
[root@sdm ~]# sudo tune2fs -m 0 -r 0 /dev/mapper/VolGroup01-LogVol00
tune2fs 1.39 (29-May-2006)
Setting reserved blocks percentage to 0% (0 blocks)
Setting reserved blocks count to 0

Going to reboot and see what happens. Don't worry, I made a backup of the files that are important to me. It's probably a good thing to put a disclaimer there for all those who would forget to back up their important files. I will let you know.

TY osor
 
Old 04-21-2007, 12:09 PM   #11
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
Hmm... well, here is some info:

Code:
[root@sdm ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                       48G  3.4G   42G   8% /
/dev/sda1             244M   17M  215M   8% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                       94G  2.5G   87G   3% /home
/dev/mapper/VolGroup01-LogVol00
                      452G   66G  387G  15% /media
/dev/mapper/VolGroup00-LogVol01
                      961M   23M  889M   3% /tmp



Code:
[root@sdm ~]# dumpe2fs -h /dev/mapper/VolGroup01-LogVol00
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 744fbac6-6c6d-4b62-bec5-69020ff0ad96
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122093568
Block count: 122085376
Reserved block count: 0
Free blocks: 101191813
Free inodes: 122075302
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 994
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 32768
Inode blocks per group: 1024
Filesystem created: Wed Apr 18 21:33:39 2007
Last mount time: Sun Apr 22 10:50:52 2007
Last write time: Sun Apr 22 10:50:52 2007
Mount count: 9
Maximum mount count: -1
Last checked: Wed Apr 18 21:33:39 2007
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: da8ff037-0329-454b-b6d6-77e55810347d
Journal backup: inode blocks
Journal size: 128M


So it looks like it worked, but the drive is still only showing 452 GB... shouldn't the system show the new changes? (I did do a reboot.)

TY osor.
 
Old 04-21-2007, 12:18 PM   #12
Saritul
LQ Newbie
 
Registered: Apr 2007
Posts: 9

Rep: Reputation: 0
Cool

What does "df -h" say?

Last edited by Saritul; 04-21-2007 at 12:23 PM.
 
Old 04-21-2007, 02:47 PM   #13
osor
HCL Maintainer
 
Registered: Jan 2006
Distribution: (H)LFS, Gentoo
Posts: 2,450

Rep: Reputation: 78
I am an idiot. The number of blocks returned by dumpe2fs is the total number of physical blocks available for use by the filesystem. The filesystem itself has a good deal of overhead: blocks are divided into block groups to minimize seek times and fragmentation, and a number of blocks in each block group store filesystem information such as backup copies of the superblock, the block usage bitmap for the group, the inode usage bitmap for the group, and the group's inode table. The rest of the blocks are data blocks (the ones usable by you). So the total usable space (reserved or otherwise) will always be less than the total number of blocks reported by dumpe2fs. You can examine each block group's specifics in detail by looking at the complete output of dumpe2fs (i.e., "dumpe2fs /dev/mapper/VolGroup01-LogVol00").
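
(A rough back-of-the-envelope check, using the dumpe2fs figures above, of where most of that overhead goes. The inode tables alone account for the bulk of the gap between 465 GiB and the 452G that df reports; this ignores the bitmaps, superblock backups and the journal, so it is only approximate:)

Code:
# Inode tables: inode count x inode size (bytes), converted to GiB
echo '122093568 * 128 / 1024^3' | bc -l   # ~14.55 GiB of inode tables

# Subtract that from the raw filesystem size computed earlier
echo '465.72 - 14.55' | bc                # ~451 GiB, close to the 452G df shows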

So the whole thing about reserved blocks didn’t change the number of blocks reported by statfs() (and consequently the “Size” field from df). You did get some space for “free” however:

Original “df -h”
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      142G  3.4G  132G   3% /
/dev/sda1              99M   17M   78M  18% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup01-LogVol00
                      452G   82G  347G  20% /home
/dev/sda5             487M   13M  449M   3% /tmp
Newer “df -h”
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
                       48G  3.4G   42G   8% /
/dev/sda1             244M   17M  215M   8% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/mapper/VolGroup00-LogVol03
                       94G  2.5G   87G   3% /home
/dev/mapper/VolGroup01-LogVol00
                      452G   66G  387G  15% /media
/dev/mapper/VolGroup00-LogVol01
                      961M   23M  889M   3% /tmp
If you really wanted to, you could cut down on the space used for filesystem information (at the time of filesystem creation) by reducing the number of block groups created by mke2fs. This would, however, be futile in all but the most specialized circumstances, because read times would increase dramatically (mke2fs optimizes reading and fragmentation when choosing the number of block groups).
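
(For completeness, the filesystem-creation knobs involved look roughly like this. This is only a sketch, and as noted above rarely worth doing; -g sets the blocks per block group, while -i sets the bytes-per-inode ratio, which is what really drives the size of the inode tables:)

Code:
# Example mke2fs invocation (destructive; shown for illustration only)
#   -j            create an ext3 journal
#   -b 4096       4 KiB blocks
#   -g 32768      blocks per block group
#   -i 1048576    one inode per MiB of data, shrinking the inode tables
mke2fs -j -b 4096 -g 32768 -i 1048576 /dev/mapper/VolGroup01-LogVol00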

So the moral(s) of the story:
  • Drive manufacturers "lie" by saying 500 gigabytes when there are only about 465.72 binary gigabytes (GiB) of physical space.
  • The number of gigabytes usable for data will always be less than that (and just how much less depends on filesystem specifics).
  • Linux's extended filesystems usually keep some amount of space reserved for root to use once the filesystem is full. Depending on how the filesystem will be used, filling it up may not be catastrophic, and that reserved space can safely be removed. (A filesystem used for /home is a good example of a situation in which reserved space is not important; filesystems used for / or /var are good examples of situations in which it is.)
 
Old 04-22-2007, 11:15 AM   #14
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
Awesome, I am an idiot too, though. I missed the "original" and "new" available space; I was only looking at the size rather than the available space. Other than that, what you say makes sense. Once you refreshed my memory about reserved space and how Linux likes to be smart, it made sense. The loss seemed high, though, and that's why I wanted to look into why a 500 GB hard drive (or yes, those tricky manufacturers who somehow pull the blinds over our eyes) is actually 465 GB (I already knew that hard drives worked like that). So when I got 452 GB of available space I was puzzled.

Anyway this has been a great lesson and another wonderful wealth of knowledge to gain. Thank you osor for all of your time and comments. Have a good one!
 
Old 04-22-2007, 11:40 AM   #15
bskrakes
Member
 
Registered: Sep 2006
Location: Canada, Alberta
Distribution: RHEL 4 and up, CentOS 5.x, Fedora Core 5 and up, Ubuntu 8 and up
Posts: 251

Original Poster
Rep: Reputation: 32
WAIT... that isn't right. The only reason the available space is different is that I had fewer files on there than before... check the used space.
 
  

