I am using LVM on my system. An LV named "data" is mounted, and its lvdisplay output shows:
Code:
# lvdisplay
--- Logical volume ---
LV Name /dev/system/data
VG Name system
LV UUID PWxv0a-zRUd-b3yP-bDCk-2W2L-L9Nk-PvKbsC
LV Write Access read/write
LV Status available
# open 1
LV Size 303.00 GiB
Current LE 77568
Segments 5
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
Notice that the size of the LV is 303 GiB.
The problem is that df -h reports a smaller size for the mounted filesystem. Is this normal??
Sometimes it's rounding; sometimes overhead. Or a combination of the two.
As you are using ext4, I assume you have a fairly modern system and are not running into some sort of old algorithm deficiency... But that's not out of the question.
Quote:
Is this normal??
Yes. The numbers between fdisk (if your filesystem is directly on a partition) or LVM, and the various reports of space used ('df', 'dumpe2fs -h', etc.), often differ. Maybe "always differ" would be more accurate.
I'm curious. What is your LE size for that LVM LV? ('vgdisplay' will tell you. Please post that.) I can't work my way back to it from the LV Size of 303.00 GiB, and the Current LE of 77568 that you provided.
Also, could you do a 'dumpe2fs -h /dev/mapper/system-data' and post that?
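In the meantime, if you want to line the numbers up yourself, something like this shows all three views of the size (using the device and mount point from this thread):
Code:
blockdev --getsize64 /dev/mapper/system-data                        # bytes in the block device
dumpe2fs -h /dev/mapper/system-data | grep -E 'Block (count|size)'  # what the filesystem was built with
df -B1 /mnt/DATA                                                    # what statfs reports, in bytes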
By default, ext2/3/4 reserves 5% of the blocks for root. This allows root to log in (and system daemons to keep working) if the filesystem fills up, and it helps reduce fragmentation. Reserved space does not appear as used, so the numbers do not add up. If the LV is just for data, it is safe to change the reserved space to zero. The difference between the LV size and the filesystem size is due to overhead; the overhead consists of the inode tables and all of the superblocks.
There's the right answer. Also note that the reserved blocks can be tweaked using tune2fs(8). FYI, from its manpage:
Code:
Reserving some number of filesystem blocks for use by privileged
processes is done to avoid filesystem fragmentation, and to allow
system daemons, such as syslogd(8), to continue to function correctly
after non-privileged processes are prevented from writing to the
filesystem. Normally, the default percentage of reserved blocks is 5%.
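So, for a data-only filesystem like the one in this thread, you could reclaim that space with tune2fs; '-m' sets the reserved percentage (a sketch, run it against your own device):
Code:
tune2fs -l /dev/mapper/system-data | grep -i 'reserved block'   # current reservation
tune2fs -m 0 /dev/mapper/system-data                            # zero it out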
The 1.00 GiB allocation is 32 logical extents of 32.00 MiB each.
Code:
[root@athlon ~]# lvdisplay /dev/mapper/athlon-temp
--- Logical volume ---
LV Name /dev/athlon/temp
VG Name athlon
LV UUID qUhOa6-wBID-HUxc-PCgY-JKo8-bSY1-kBomIP
LV Write Access read/write
LV Status available
# open 0
LV Size 1.00 GiB
Current LE 32
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
[root@athlon ~]# vgdisplay
--- Volume group ---
VG Name athlon
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.36 TiB
PE Size 32.00 MiB
Total PE 44709
Alloc PE / Size 13504 / 422.00 GiB
Free PE / Size 31205 / 975.16 GiB
VG UUID N3dKtB-3VYX-hK9n-AbtL-Y5jQ-CnXo-XbLkHh
Note: LVM does good math. "-L1G" allocates 32 PEs of 32.00 MiB each, and 32 × 32 MiB = 1024 MiB = 1 GiB. (1 MiB = 1024 × 1024 = 1,048,576 bytes; 1 GiB = 1024 × 1024 × 1024 = 1,073,741,824 bytes.)
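(For reference, the test LV would have been created with something like this; the actual lvcreate command isn't shown in the thread:)
Code:
lvcreate -L 1G -n temp athlon   # 1 GiB rounds to 32 of the VG's 32 MiB extents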
Next, format it with ext4 and default parameters:
Code:
[root@athlon ~]# mkfs.ext4 /dev/mapper/athlon-temp
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Note: good math here, too. 262144 blocks × 4096 bytes (the "Block size=4096" above) = 1,073,741,824 bytes,
which is exactly 1 GiB (1024 × 1024 × 1024 = 1,073,741,824).
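Same check in the shell:
Code:
echo $(( 262144 * 4096 ))   # 1073741824 bytes = 1 GiB exactly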
Finally, do some displays.
Code:
[root@athlon ~]# mount /dev/mapper/athlon-temp /mnt
[root@athlon ~]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/athlon-temp
1008M 34M 924M 4% /mnt
1008M?
Code:
[root@athlon ~]# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/athlon-temp
1032088 34052 945608 4% /mnt
1032088 1K-blocks?
There is no clue in a dump of the superblock.
Code:
[root@athlon ~]# dumpe2fs -h /dev/mapper/athlon-temp
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: e8b127b1-3e3e-496f-bde3-797c5c23a0a0
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 65536
Block count: 262144
Reserved block count: 13107
Free blocks: 249509
Free inodes: 65525
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 63
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Wed Jul 6 11:08:03 2011
Last mount time: Wed Jul 6 11:08:33 2011
Last write time: Wed Jul 6 11:08:33 2011
Mount count: 1
Maximum mount count: 21
Last checked: Wed Jul 6 11:08:03 2011
Check interval: 15552000 (6 months)
Next check after: Mon Jan 2 10:08:03 2012
Lifetime writes: 48 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: e2dee7d6-aa56-4d29-926b-20909f69cd45
Journal backup: inode blocks
Journal features: (none)
Journal size: 32M
Journal length: 8192
Journal sequence: 0x00000001
Journal start: 0
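Next, raise the reserved block count. (The exact command wasn't captured in the thread, but judging by the superblock dump further down it was presumably something like this; '-r' sets the reservation as an absolute block count:)
Code:
tune2fs -r 100000 /dev/mapper/athlon-temp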
The "Size" is, of course, unchanged; "Used" is unchanged (the 34M is overhead - the top directory, inodes, journal, etc.).
"Avail" reflects the reduction caused by the increased reserved number of blocks.
Code:
[root@athlon ~]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/athlon-temp
1008M 34M 585M 6% /mnt
All that's changed in the superblock is the "Reserved block count".
Code:
[root@athlon ~]# dumpe2fs -h /dev/mapper/athlon-temp
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: e8b127b1-3e3e-496f-bde3-797c5c23a0a0
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 65536
Block count: 262144
Reserved block count: 100000
Free blocks: 249509
Free inodes: 65525
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 63
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Wed Jul 6 11:08:03 2011
Last mount time: Wed Jul 6 11:08:33 2011
Last write time: Wed Jul 6 14:40:44 2011
Mount count: 1
Maximum mount count: 21
Last checked: Wed Jul 6 11:08:03 2011
Check interval: 15552000 (6 months)
Next check after: Mon Jan 2 10:08:03 2012
Lifetime writes: 48 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: e2dee7d6-aa56-4d29-926b-20909f69cd45
Journal backup: inode blocks
Journal features: (none)
Journal size: 32M
Journal length: 8192
Journal sequence: 0x00000001
Journal start: 0
You'll notice that 'df' reports my "Size" as larger than it should be.
If someone has an explanation for the 1008M "Size" I see I would appreciate it.
I've been thinking, and what ranban282 said is true as well.
Quote:
Originally Posted by ranban282
Hi,
This occurs when your partition is partly corrupted. Since the discrepancy is small, you don't need to worry about it just yet.
If you have lost power to your system (or in some other manner taken it down "hard"), you will end up with orphaned inodes. "Corruption" is a strong word; "damage" less so. Either way, after a while you can accumulate a lot of orphaned inodes. The fsck that runs at startup is fairly worthless for repairing this (it just uses the journal for recovery); you need to repair it manually. One of the fields in the superblock is "First orphan inode:", but if the damage is fresh that may not be filled in yet.
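The repair itself is just e2fsck run with the filesystem unmounted, e.g. booted from rescue media (a sketch using the device name from this thread):
Code:
umount /mnt/DATA                     # the filesystem must not be mounted
e2fsck -f /dev/mapper/system-data    # force a full check and repair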
Truly overwhelming amount of information... thank you, guys, for your time and effort.
Quote:
Originally Posted by PTrenholme
Try df -H instead.
Code:
# df -H /dev/mapper/system-data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/system-data 321G 199G 106G 66% /mnt/DATA
Size is 321G?? But of course -H uses powers of 1000 rather than 1024, so I suppose the larger value is explained.
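The arithmetic checks out (bash integer math; 1000-based gigabytes vs. 1024-based gibibytes):
Code:
echo $(( 321 * 1000**3 / 1024**3 ))   # 298, i.e. 321 GB is roughly 299 GiB
So against the 303 GiB LV there are roughly 4 GiB unaccounted for, which fits the metadata-overhead explanation above.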
Quote:
Originally Posted by tommylovell
I'm curious. What is your LE size for that LVM LV? ('vgdisplay' will tell you. Please post that.) I can't work my way back to it from the LV Size of 303.00 GiB, and the Current LE of 77568 that you provided.
Also, could you do a 'dumpe2fs -h /dev/mapper/system-data' and post that?
Code:
# vgdisplay
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 35
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 2
Act PV 2
VG Size 952.46 GiB
PE Size 4.00 MiB
Total PE 243829
Alloc PE / Size 83200 / 325.00 GiB
Free PE / Size 160629 / 627.46 GiB
VG UUID 1KQhFq-hnS3-Nxmy-0Q5g-S3bT-jYiy-S7re08
# dumpe2fs -h /dev/mapper/system-data
dumpe2fs 1.41.14 (22-Dec-2010)
Filesystem volume name: <none>
Last mounted on: /mnt/DATA
Filesystem UUID: e2991482-4c18-48aa-a1a8-e05cbb356d39
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 19779840
Block count: 79429632
Reserved block count: 3966156
Free blocks: 29626176
Free inodes: 19728987
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 554
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8160
Inode blocks per group: 510
RAID stride: 15264
Flex block group size: 16
Filesystem created: Thu Apr 15 08:05:39 2010
Last mount time: Thu Jul 7 23:25:03 2011
Last write time: Thu Jul 7 23:25:03 2011
Mount count: 3
Maximum mount count: -1
Last checked: Wed Jul 6 19:33:28 2011
Check interval: 0 (<none>)
Lifetime writes: 156 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 7c6a401e-3551-41aa-ad2b-5c3c0eba596f
Journal backup: inode blocks
Journal features: journal_incompat_revoke
Journal size: 128M
Journal length: 32768
Journal sequence: 0x0001bb81
Journal start: 1
Quote:
Originally Posted by tommylovell
If you have lost power to your system (or in some other manner have taken it down "hard"), you will end up with orphaned inodes. Corruption is a strong word. Damage less so. But either way, after a while you can accumulate a lot of orphaned inodes. The fsck that runs at startup is fairly worthless for repairing this (it just uses the journal for recovery); you need to repair it manually. One of the fields in the superblock is "First orphan inode:", but if damage is fresh that may not be filled in yet.
Yours isn't necessarily broken. You need to have the filesystem unmounted to actually repair it; that means bringing it up in rescue mode. There's a way to force an 'fsck' on reboot, but I don't know if that really forces a repair. (Without the '-f' flag, I think 'fsck' looks at the "Filesystem state:" in the superblock, and if it says "clean" it skips the "deep repair" that '-f' gives you.)
You can check beforehand, on a mounted filesystem, whether it needs repair using 'fsck -fn /dev/mapper/system-data' (the '-n' answers "no" to every prompt and keeps the check read-only, which is what makes it safe on a mounted filesystem).
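(As for the forced check at reboot mentioned above: on sysvinit-era systems like these, the boot scripts look for a flag file; whether they also pass '-f' depends on the distro, so consider this a sketch:)
Code:
touch /forcefsck   # boot scripts run fsck if this file exists, then delete it
reboot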
Here's my root filesystem. It's suffered a little damage. It's great to have a journaling file system!
Code:
[root@athlon ~]# fsck -fn /dev/mapper/athlon-root
fsck from util-linux-ng 2.18
e2fsck 1.41.12 (17-May-2010)
Warning! /dev/mapper/athlon-root is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found. Fix? no
Inode 918626 was part of the orphaned inode list. IGNORED.
Inode 918738 was part of the orphaned inode list. IGNORED.
Deleted inode 1704349 has zero dtime. Fix? no
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: -(4449337--4449345) -4488217
Fix? no
Free blocks count wrong (13418811, counted=13371119).
Fix? no
Inode bitmap differences: -918626 -918738 -1704349
Fix? no
Free inodes count wrong (6424113, counted=6420290).
Fix? no
/dev/mapper/athlon-root: ********** WARNING: Filesystem still has errors **********
/dev/mapper/athlon-root: 129487/6553600 files (0.2% non-contiguous), 12795589/26214400 blocks
[root@athlon ~]#
I have some news on why 'df' is "off". I'll post that tomorrow when I organize it a little bit.
The bottom line is that 'df' gets its info from the statfs function call. It is coming from the kernel and not from the filesystem's superblock (at least not directly).
The information returned by statfs appears reasonable but is not the same as any field in the ext4 superblock, hence some of my confusion.
Basically, 'df' calls get_fs_usage (in fsusage.c) which makes the statfs function call. Here is what the struct looks like after the statfs call for the athlon-temp filesystem that I created to test with (shown earlier).
(A description of the statfs struct is in "man statfs" and, of course, in statfs.h.)
Code:
field     offset  len   hex (LE)  hex (BE)   decimal   description
f_type     0x00    8    53ef      ef53                 ext4 filesystem magic
f_bsize    0x08    8    0010      1000         4,096   block size of filesystem
f_blocks   0x10    8    e6ef03    03efe6     258,022   total data blocks in filesystem (maybe "data blocks" is the key)
f_bfree    0x18    8    a5ce03    03cea5     249,509   free blocks in filesystem
f_bavail   0x20    8    054802    024805     149,509   free blocks available to unprivileged user
f_files    0x28    8    000001    010000      65,536   total file nodes in filesystem
f_ffree    0x30    8    f5ff      fff5        65,525   free file nodes in filesystem
f_blocks is described as "total data blocks in filesystem", not "total blocks in filesystem".
It's possible that f_blocks is the total blocks minus (at least some of) the overhead incurred when you format the file system.
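Incidentally, you don't need C code or a debugger to see those statfs values; stat(1) has a filesystem mode that makes the same call:
Code:
# 'stat -f' calls statfs(2) on the given path and prints f_type, the block
# size, and the total/free/available block and inode counts
stat -f /mnt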
Quote:
Originally Posted by tommylovell
You'll notice that 'df' reports my "Size" as larger than it should be.
If someone has an explanation for the 1008M "Size" I see I would appreciate it.
Wrong, wrong, wrong. That 1008M / 1032088 1K-blocks is correct, based on 258,022 blocks. 1 GiB is 1024 MiB, not 1000 MiB, so 1008M is less than a gig. Duh.
So the math is all good.
258,022 total data blocks × 4 KiB block size = 1,032,088 1K-blocks.
1,032,088 / 1024 = 1007.8984375 MiB, which 'df -h' rounds up to 1008M.
1,032,088 / (1024 × 1024) = 0.984 GiB, or roughly 0.98 GiB.
The only thing I don't understand is the ext4 superblock values
Code:
Block count: 262144
Free blocks: 249509
versus the statfs f_blocks value of 258022
Code:
[root@athlon coreutils]# df --block-size=4096 /mnt
Filesystem 4K-blocks Used Available Use% Mounted on
/dev/mapper/athlon-temp
258022 8513 149509 6% /mnt
[root@athlon coreutils]#
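For what it's worth, the dumpe2fs numbers above seem to account for the hidden blocks almost exactly; this is my reconstruction, not something confirmed anywhere in the thread:
Code:
echo $(( 262144 - 258022 ))    # 4122 blocks hidden by statfs (~16 MiB)
echo $(( 8 * (1 + 1 + 512) ))  # 4112: each of the 8 groups has a block bitmap,
                               #       an inode bitmap, and 512 inode-table blocks
echo $(( 5 * (1 + 1) ))        # 10: superblock + group-descriptor copies live in
                               #     groups 0, 1, 3, 5 and 7 (sparse_super)
# 4112 + 10 = 4122, so the journal and reserved GDT blocks apparently aren't counted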
But since we're only talking about a 16 MiB discrepancy out of 1 GiB, I'm not going to chase this into the kernel.
Hehe... no problem... I am actually glad that there are people like you in the community, who write lots of posts in a thread... thanks again for your time, effort, and of course the heap of knowledge you shared...
So I suppose everything tallies now??
The values seem to be OK... and the major difference is because of the disk space reserved for root...!!