LinuxQuestions.org
Old 07-06-2011, 09:23 AM   #1
firewiz87
Member
 
Registered: Jan 2006
Distribution: OpenSUSE 11.2, OpenSUSE 11.3, Arch
Posts: 240

Rep: Reputation: 37
df -h report does not seem to add up


I am using LVM on my system. A logical volume named data is mounted; its lvdisplay output shows:
Code:
# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/system/data
  VG Name                system
  LV UUID                PWxv0a-zRUd-b3yP-bDCk-2W2L-L9Nk-PvKbsC
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                303.00 GiB
  Current LE             77568
  Segments               5
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0
Notice that the size of the LV is 303 GiB.
The problem is that df -h shows the following:
Code:
# df -h /dev/mapper/system-data 
Filesystem               Size  Used Avail Use% 
/dev/mapper/system-data  299G  186G   98G  66%
Notice that although the logical volume is 303 GiB, df reports the size as 299G, which is also NOT equal to the Used + Avail shown (186G + 98G = 284G).

Why does this happen?? Is this normal??
I am using ext4

Thanks in advance
 
Old 07-06-2011, 09:54 AM   #2
ranban282
LQ Newbie
 
Registered: Jul 2006
Location: Hyderabad
Distribution: Fedora 8
Posts: 28

Rep: Reputation: 2
Hi,
This occurs when your partition is partly corrupted. Since the discrepancy is small, you don't need to worry about it just yet.
 
Old 07-06-2011, 11:02 AM   #3
PTrenholme
Senior Member
 
Registered: Dec 2004
Location: Olympia, WA, USA
Distribution: Fedora, (K)Ubuntu
Posts: 4,187

Rep: Reputation: 354
Try df -H instead.
 
Old 07-06-2011, 11:12 AM   #4
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103
Quote:
Why does this happen??
Sometimes it's rounding; sometimes overhead. Or a combination of the two.

As you are using ext4, I assume you have a fairly modern system and are not running into some sort of old algorithm deficiency... But that's not out of the question.

Quote:
Is this normal??
Yes. The numbers between fdisk (if your filesystem is directly on a partition) or LVM, and the various reports of space used ('df', 'dumpe2fs -h', etc.), often differ. Maybe "always differ" would be more accurate.

I'm curious. What is your LE size for that LVM LV? ('vgdisplay' will tell you. Please post that.) I can't work my way back to it from the LV Size of 303.00 GiB, and the Current LE of 77568 that you provided.

Also, could you do a 'dumpe2fs -h /dev/mapper/system-data' and post that?

Again, just curious. Thanks.
 
Old 07-06-2011, 12:24 PM   #5
michaelk
Moderator
 
Registered: Aug 2002
Posts: 25,700

Rep: Reputation: 5895
By default, an ext2/3/4 filesystem reserves 5% of its blocks for root. This allows root to log in if the filesystem fills up, and helps reduce fragmentation. Reserved space does not appear as "Used", so the numbers do not add up. If the LV holds only data, it is safe to set the reserve to zero. The difference between the LV size and the filesystem size is due to overhead: the inode tables and all of the superblocks.
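A quick shell check of that: using the df -h numbers from the first post, the gap between Size and Used + Avail is about 5% of the filesystem, which matches the default reserve.

```shell
# df -h reported: Size 299G, Used 186G, Avail 98G
echo $(( 299 - 186 - 98 ))   # 15 GiB unaccounted for; ~5% of 299G
# To reclaim the reserve on a data-only filesystem (run as root):
#   tune2fs -m 0 /dev/mapper/system-data
```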

Last edited by michaelk; 07-06-2011 at 12:27 PM.
 
1 members found this post helpful.
Old 07-06-2011, 12:29 PM   #6
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora
Posts: 3,935
Blog Entries: 5

Rep: Reputation: Disabled
Quote:
Originally Posted by michaelk
By default an ext2/3/4 reserves 5% for root.
There's the right answer. Also note that the reserved blocks can be tweaked using tune2fs(8). FYI, from its man page:

Code:
Reserving some number  of  filesystem
blocks for use by privileged processes is done to avoid filesys‐
tem fragmentation, and to allow system  daemons,  such  as  sys‐
logd(8),  to continue to function correctly after non-privileged
processes are prevented from writing to  the  filesystem.   Nor‐
mally, the default percentage of reserved blocks is 5%.
 
Old 07-06-2011, 09:29 PM   #7
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103
That's the right answer, but only for this part of the post:
Quote:
...it shows size as 299GB which is also NOT equal to the Used + Avail shown.
The "Reserved block count:" only affects the "Avail" statistic (and indirectly the "Use%") in the 'df' command output.

The "Size" statistic in the 'df' command has always been hard to fathom.

So, here's an experiment:

Allocate an LVM LV.
Code:
[root@athlon ~]# lvcreate -L1G -n temp athlon
  Logical volume "temp" created
The 1.00 GiB allocation is 32 logical extents of 32.00 MiB each.
Code:
[root@athlon ~]# lvdisplay /dev/mapper/athlon-temp 
  --- Logical volume ---
  LV Name                /dev/athlon/temp
  VG Name                athlon
  LV UUID                qUhOa6-wBID-HUxc-PCgY-JKo8-bSY1-kBomIP
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
   

[root@athlon ~]# vgdisplay 
  --- Volume group ---
  VG Name               athlon
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TiB
  PE Size               32.00 MiB
  Total PE              44709
  Alloc PE / Size       13504 / 422.00 GiB
  Free  PE / Size       31205 / 975.16 GiB
  VG UUID               N3dKtB-3VYX-hK9n-AbtL-Y5jQ-CnXo-XbLkHh
Note: LVM does good math. "-L1G" allocates 32 PEs of 32.00 MiB each: 32 * 32 MiB = 1024 MiB = 1 GiB (1 MiB = 1024*1024 = 1,048,576 bytes; 1 GiB = 1024*1024*1024 = 1,073,741,824 bytes).
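The extent arithmetic can be checked directly in the shell (pure integer math on the numbers above):

```shell
# 32 logical extents at 32 MiB each
echo $(( 32 * 32 ))                  # 1024 MiB = 1 GiB
echo $(( 32 * 32 * 1024 * 1024 ))    # 1073741824 bytes
```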

Next, format it with ext4 and default parameters:

Code:
[root@athlon ~]# mkfs.ext4 /dev/mapper/athlon-temp 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Note: good math here. "262144" blocks of "Block size=4096" = 1,073,741,824 bytes, which is exactly 1 GiB (1024 * 1024 * 1024 = 1,073,741,824).
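Both the block math and the default 5% reserve in the mkfs output check out in the shell:

```shell
echo $(( 262144 * 4096 ))      # 1073741824 bytes = 1 GiB
echo $(( 262144 * 5 / 100 ))   # 13107 reserved blocks, matching mkfs's "13107 blocks (5.00%)"
```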

Finally, do some displays.

Code:
[root@athlon ~]# mount /dev/mapper/athlon-temp /mnt

[root@athlon ~]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/athlon-temp
                     1008M   34M  924M   4% /mnt
1008M?

Code:
[root@athlon ~]# df  /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/athlon-temp
                       1032088     34052    945608   4% /mnt
1032088 1K-blocks?

There is no clue in a dump of the superblock.

Code:
[root@athlon ~]# dumpe2fs -h /dev/mapper/athlon-temp
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          e8b127b1-3e3e-496f-bde3-797c5c23a0a0
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              65536
Block count:              262144
Reserved block count:     13107
Free blocks:              249509
Free inodes:              65525
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      63
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Jul  6 11:08:03 2011
Last mount time:          Wed Jul  6 11:08:33 2011
Last write time:          Wed Jul  6 11:08:33 2011
Mount count:              1
Maximum mount count:      21
Last checked:             Wed Jul  6 11:08:03 2011
Check interval:           15552000 (6 months)
Next check after:         Mon Jan  2 10:08:03 2012
Lifetime writes:          48 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      e2dee7d6-aa56-4d29-926b-20909f69cd45
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             32M
Journal length:           8192
Journal sequence:         0x00000001
Journal start:            0


Now another experiment:

Change the reserved blocks.

Code:
[root@athlon ~]# tune2fs -r 100000 /dev/mapper/athlon-temp 
tune2fs 1.41.12 (17-May-2010)
Setting reserved blocks count to 100000
The "Size" is, of course, unchanged; "Used" is unchanged (the 34M is overhead - the top directory, inodes, journal, etc.).
"Avail" reflects the reduction caused by the increased reserved number of blocks.
Code:
[root@athlon ~]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/athlon-temp
                     1008M   34M  585M   6% /mnt
All that's changed in the superblock is the "Reserved block count".
Code:
[root@athlon ~]# dumpe2fs -h /dev/mapper/athlon-temp
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          e8b127b1-3e3e-496f-bde3-797c5c23a0a0
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              65536
Block count:              262144
Reserved block count:     100000
Free blocks:              249509
Free inodes:              65525
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      63
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Jul  6 11:08:03 2011
Last mount time:          Wed Jul  6 11:08:33 2011
Last write time:          Wed Jul  6 14:40:44 2011
Mount count:              1
Maximum mount count:      21
Last checked:             Wed Jul  6 11:08:03 2011
Check interval:           15552000 (6 months)
Next check after:         Mon Jan  2 10:08:03 2012
Lifetime writes:          48 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      e2dee7d6-aa56-4d29-926b-20909f69cd45
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             32M
Journal length:           8192
Journal sequence:         0x00000001
Journal start:            0
You'll notice that 'df' reports my "Size" as larger than it should be.

If someone has an explanation for the 1008M "Size" I see I would appreciate it.
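For what it's worth, the drop in "Avail" is exactly consistent with the reserved-block change in the experiment above: the reserve grew from 13107 to 100000 blocks, i.e. 86893 more 4 KiB blocks withheld from the earlier 945608 1K-blocks available.

```shell
# Avail before: 945608 1K-blocks (924M); reserve grew by (100000 - 13107) 4KiB blocks
echo $(( 945608 - (100000 - 13107) * 4 ))            # 598036 1K-blocks
echo $(( (945608 - (100000 - 13107) * 4) / 1024 ))   # 584 MiB; df -h rounds up and shows 585M
```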
 
Old 07-07-2011, 08:23 AM   #8
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103
I've been thinking, and what ranban282 said is true as well.
Quote:
Originally Posted by ranban282 View Post
Hi,
This occurs when your partition is partly corrupted. Since the discrepancy is small, you don't need to worry about it just yet.
If you have lost power to your system (or taken it down "hard" in some other way), you will end up with orphaned inodes. Corruption is a strong word; damage, less so. But either way, after a while you can accumulate a lot of orphaned inodes. The fsck that runs at startup is fairly worthless for repairing this (it just uses the journal for recovery); you need to repair it manually. One of the fields in the superblock is "First orphan inode:", but if the damage is fresh, that may not be filled in yet.
 
Old 07-07-2011, 09:39 AM   #9
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora
Posts: 3,935
Blog Entries: 5

Rep: Reputation: Disabled
@tommylovell: I don't have an explanation for you, but your musings are legit. (Good observations.)
 
Old 07-07-2011, 01:37 PM   #10
firewiz87
Member
 
Registered: Jan 2006
Distribution: OpenSUSE 11.2, OpenSUSE 11.3, Arch
Posts: 240

Original Poster
Rep: Reputation: 37
Truly overwhelming amount of information... thank you guys for your time and effort

Quote:
Originally Posted by PTrenholme View Post
Try df -H instead.
Code:
# df -H /dev/mapper/system-data 
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-data  321G  199G  106G  66% /mnt/DATA
Size is 321G?? But of course -H uses powers of 1000 rather than 1024, so I suppose the larger value is explained.
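The two readings are in fact consistent: converting 299 binary gigabytes (GiB, 2^30) into the decimal gigabytes (GB, 10^9) that -H reports gives the same figure (awk handles the floating-point step):

```shell
# 299 "G" from df -h are GiB; df -H reports GB
awk 'BEGIN { printf "%.0f\n", 299 * 1024^3 / 1000^3 }'   # 321
```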

Quote:
Originally Posted by tommylovell View Post
I'm curious. What is your LE size for that LVM LV? ('vgdisplay' will tell you. Please post that.) I can't work my way back to it from the LV Size of 303.00 GiB, and the Current LE of 77568 that you provided.

Also, could you do a 'dumpe2fs -h /dev/mapper/system-data' and post that?
Code:
# vgdisplay
  --- Volume group ---
  VG Name               system
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  35
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               952.46 GiB
  PE Size               4.00 MiB
  Total PE              243829
  Alloc PE / Size       83200 / 325.00 GiB
  Free  PE / Size       160629 / 627.46 GiB
  VG UUID               1KQhFq-hnS3-Nxmy-0Q5g-S3bT-jYiy-S7re08


dumpe2fs -h /dev/mapper/system-data
dumpe2fs 1.41.14 (22-Dec-2010)
Filesystem volume name:   <none>
Last mounted on:          /mnt/DATA
Filesystem UUID:          e2991482-4c18-48aa-a1a8-e05cbb356d39
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              19779840
Block count:              79429632
Reserved block count:     3966156
Free blocks:              29626176
Free inodes:              19728987
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      554
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8160
Inode blocks per group:   510
RAID stride:              15264
Flex block group size:    16
Filesystem created:       Thu Apr 15 08:05:39 2010
Last mount time:          Thu Jul  7 23:25:03 2011
Last write time:          Thu Jul  7 23:25:03 2011
Mount count:              3
Maximum mount count:      -1
Last checked:             Wed Jul  6 19:33:28 2011
Check interval:           0 (<none>)
Lifetime writes:          156 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      7c6a401e-3551-41aa-ad2b-5c3c0eba596f
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x0001bb81
Journal start:            1
Quote:
Originally Posted by tommylovell View Post
If you have lost power to your system (or taken it down "hard" in some other way), you will end up with orphaned inodes. Corruption is a strong word; damage, less so. But either way, after a while you can accumulate a lot of orphaned inodes. The fsck that runs at startup is fairly worthless for repairing this (it just uses the journal for recovery); you need to repair it manually. One of the fields in the superblock is "First orphan inode:", but if the damage is fresh, that may not be filled in yet.
So how can it be fixed???

Last edited by firewiz87; 07-08-2011 at 11:19 PM.
 
Old 07-07-2011, 11:43 PM   #11
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103
Quote:
Originally Posted by firewiz87 View Post
...
PE Size 4.00 MiB
...
Thanks, firewiz87 for the info. The 4MB PE Size makes sense. I don't know why it didn't before... Brain fart, I guess.

With this,
Quote:
LV Size 303.00 GiB
Current LE 77568
the math works:

77568 LEs * 4 MiB PE size = 310,272 MiB
310,272 MiB / 1024 = 303 GiB
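That arithmetic, in shell form:

```shell
echo $(( 77568 * 4 ))          # 310272 MiB
echo $(( 77568 * 4 / 1024 ))   # 303 GiB, matching lvdisplay's "LV Size 303.00 GiB"
```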

Quote:
Originally Posted by firewiz87 View Post
So how can it be fixed???
Yours isn't necessarily broken. You need to have the filesystem unmounted to actually repair it. That means bringing it up in rescue mode. There's a way to force an 'fsck' on reboot, but I don't know if that really forces a repair. (Without the '-f' flag, I think 'fsck' looks at the "Filesystem state:" in the superblock, and if it says "clean" it skips the "deep repair" that '-f' gives you.)

You can check it beforehand on a mounted filesystem, to see whether it needs repair, using 'fsck -fn /dev/mapper/system-data' ('-n' answers "no" to every prompt, so nothing is changed).

Here's my root directory. It's suffered a little damage. It's great to have a journaling file system!
Code:
[root@athlon ~]# fsck -fn /dev/mapper/athlon-root 
fsck from util-linux-ng 2.18
e2fsck 1.41.12 (17-May-2010)
Warning!  /dev/mapper/athlon-root is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found.  Fix? no

Inode 918626 was part of the orphaned inode list.  IGNORED.
Inode 918738 was part of the orphaned inode list.  IGNORED.
Deleted inode 1704349 has zero dtime.  Fix? no

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -(4449337--4449345) -4488217
Fix? no

Free blocks count wrong (13418811, counted=13371119).
Fix? no

Inode bitmap differences:  -918626 -918738 -1704349
Fix? no

Free inodes count wrong (6424113, counted=6420290).
Fix? no


/dev/mapper/athlon-root: ********** WARNING: Filesystem still has errors **********

/dev/mapper/athlon-root: 129487/6553600 files (0.2% non-contiguous), 12795589/26214400 blocks
[root@athlon ~]#
I have some news on why 'df' is "off". I'll post that tomorrow when I organize it a little bit.
 
Old 07-10-2011, 07:49 PM   #12
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 380

Rep: Reputation: 103
Ok. Last post. I promise.

The bottom line is that 'df' gets its info from the statfs function call. It is coming from the kernel and not from the filesystem's superblock (at least not directly).

The information returned by statfs appears reasonable but is not the same as any field in the ext4 superblock, hence some of my confusion.

Basically, 'df' calls get_fs_usage (in fsusage.c) which makes the statfs function call. Here is what the struct looks like after the statfs call for the athlon-temp filesystem that I created to test with (shown earlier).

Code:
---------> *fsd* <--------- (120 bytes from 0x7fff30cf9d20)
        +0          +4          +8          +c            0   4   8   c   
+0000   53 ef 00 00 00 00 00 00 00 10 00 00 00 00 00 00   S...............
+0010   e6 ef 03 00 00 00 00 00 a5 ce 03 00 00 00 00 00   ................
+0020   05 48 02 00 00 00 00 00 00 00 01 00 00 00 00 00   .H..............
+0030   f5 ff 00 00 00 00 00 00 55 52 5e cd 62 1d e9 cf   ........UR^.b...
+0040   ff 00 00 00 00 00 00 00 00 10 00 00 00 00 00 00   ................
+0050   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
+0060   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
+0070   00 00 00 00 00 00 00 00                           ........
(A description of the statfs struct is in "man statfs" and, of course, in statfs.h.)
Code:
                         little   big
          hex            endian   endian   decimal
field     offset   len   value    value    value
f_type       0      8    53ef     ef53              ext4 filesystem
f_bsize      8      8    0010     1000     4096     block size of filesystem
f_blocks    10      8    e6ef03   03efe6   258,022  total data blocks in filesystem (maybe "data blocks" is the key)
f_bfree     18      8    a5ce03   03cea5   249,509  free blocks in filesystem
f_bavail    20      8    054802   024805   149,509  free blocks available to unprivileged user
f_files     28      8    000001   010000    65,536  total file nodes in filesystem
f_ffree     30      8    f5ff     fff5      65,525  free file nodes in filesystem
f_blocks is described as "total data blocks in filesystem", not "total blocks in filesystem".
It's possible that f_blocks is the total blocks minus (at least some of) the overhead incurred when you format the file system.

Quote:
Originally Posted by tommylovell View Post
You'll notice that 'df' reports my "Size" as larger than it should be.

If someone has an explanation for the 1008M "Size" I see I would appreciate it.
Wrong, wrong, wrong. That 1008M / 1032088 1K-blocks figure is correct, based on 258,022 blocks. 1024M is a GiB, not 1000M, so 1008M is less than a GiB. Duh.

So the math is all good.

258022 total data blocks * 4 KiB block size of filesystem = 1032088 1K-blocks.

1032088 / 1024 = 1007.8984375MB, rounded up to 1008MB

1032088 / (1024*1024) = 0.984275817871094GB or roughly .98GB
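The rounding step, spelled out in the shell (awk for the floating-point division):

```shell
# statfs reported 258022 data blocks of 4096 bytes
echo $(( 258022 * 4 ))                            # 1032088 1K-blocks, as plain df shows
awk 'BEGIN { printf "%.0f\n", 1032088 / 1024 }'   # 1008 MiB, as df -h shows
```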

The only thing I don't understand is the ext4 superblock values
Code:
Block count:              262144
Free blocks:              249509
versus the statfs f_blocks value of 258022
Code:
[root@athlon coreutils]# df --block-size=4096 /mnt
Filesystem           4K-blocks      Used Available Use% Mounted on
/dev/mapper/athlon-temp
                        258022      8513    149509   6% /mnt
[root@athlon coreutils]#
But since we're only talking about a 16MB discrepancy out of 1GB, I'm not going to chase this into the kernel.

Happy Motoring.
 
2 members found this post helpful.
Old 07-11-2011, 12:31 PM   #13
firewiz87
Member
 
Registered: Jan 2006
Distribution: OpenSUSE 11.2, OpenSUSE 11.3,Arch
Posts: 240

Original Poster
Rep: Reputation: 37
Quote:
Originally Posted by tommylovell View Post
Ok. Last post. I promise.
Hehe... No probs... I am actually glad that there are people like you in the community, who write lots of posts in a thread... thanks again for your time, effort and, of course, for the heap of knowledge you shared...

So I suppose everything tallies now??
The values seem to be OK... and the major difference is because of the disk space reserved for root...!!
 
  

