LinuxQuestions.org
Old 03-27-2010, 07:41 AM   #16
tredegar
LQ 5k Club
 
Registered: May 2003
Location: London, UK
Distribution: Debian "Jessie"
Posts: 6,085

Rep: Reputation: 398

Quote:
If you were loading a new RHEL server and needed to make ext3 partitions of exactly 4GB and 6GB, what number would you enter at the MB prompt on the Disk Druid screen when creating those partitions?
4GB = 4096MB, so you would enter 4096 for the MB requested in disk druid. I think this is what you have been (correctly) doing.

But the output of df -h will still read 3.9GB, because df reports the available filespace (the space for storing files), not the partition size (the space on the disk allocated to your filesystem, ext3 in your case).

To check your partition sizes use
Code:
fdisk -l /dev/sda

You will see the partition sizes listed as a number of blocks, which on my system are blocks of 1024 bytes. So, for me, the partition size is the number of blocks allocated to the partition x 1024.
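To turn such a block count into a size, just multiply by the block size. A quick sketch (the 4194304 figure below is a hypothetical count of 1024-byte blocks for an exactly-4GB partition; running fdisk -l itself needs root):

```shell
# Hypothetical block count (1024-byte blocks) for an exactly-4GB partition:
blocks=4194304
echo $(( blocks * 1024 ))               # size in bytes
echo $(( blocks * 1024 / 1073741824 )) # size in GiB (1073741824 = 1024^3)
```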

Hope this helps.
 
Old 03-27-2010, 03:45 PM   #17
Tinkster
Moderator
 
Registered: Apr 2002
Location: in a fallen world
Distribution: slackware by choice, others too :} ... android.
Posts: 23,066
Blog Entries: 11

Rep: Reputation: 910
Quote:
Originally Posted by jamescondron View Post
Christ, they'll give anyone a Linux job nowadays.

You want the installer to ask you for disk sizes in GB instead of MB, right? Perhaps you ought to consider why the installer is asking in MB as opposed to GB. If you can't answer it then dip into what the CD is doing and change it in source.

Or consider learning how many bits in a byte, how many bytes in a kilobyte, etc., etc.

If all you want is for your df outputs to look prettier, you may be in the wrong profession.

Can you please keep personal attacks out of this?
 
Old 03-29-2010, 09:12 AM   #18
rjo98
Senior Member
 
Registered: Jun 2009
Location: US
Distribution: RHEL, CentOS
Posts: 1,668

Original Poster
Rep: Reputation: 46
Thanks for the reply, tredegar. Looking at fdisk -l on the new server, I have 4192933+ blocks for my 3.9GB partitions.

Looking at the other server I was using as a comparison here, which shows 4GB in the df output, it also has 4192933+ blocks. Maybe those filesystems aren't ext3 like I was told? How can I verify that?
 
Old 03-29-2010, 09:43 AM   #19
tredegar
OK, so the partitions are the same size, but df is reporting different amounts of space.

You can tell how a partition is mounted with the command
Code:
mount
all on its own.

You will get a list of what is mounted where, and it'll also tell you the filesystem type:
Code:
/dev/sda5 on / type ext3 (rw,relatime,errors=remount-ro)
Maybe the filesystems are different.
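Another way to read the filesystem type, assuming GNU coreutils, is df's -T option, which adds a "Type" column next to the sizes (you would pass /home and /tmp instead of / to check those mounts):

```shell
# GNU df: -T adds a "Type" column, so the same tool that reports the
# sizes also shows whether the filesystem is ext3, ext4, etc.
df -T /
```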
 
Old 03-29-2010, 09:48 AM   #20
rjo98
Original Poster
Both servers say the same thing for /home and /tmp, which show as 3.9GB in the df output on the server I loaded and 4.0GB on the server someone else did.

/dev/sda7 on /home type ext3 (rw)
/dev/sda6 on /tmp type ext3 (rw)

Perhaps the other server was "tuned" so that it has a little more space, making it show 4GB?
 
Old 03-29-2010, 09:55 AM   #21
tredegar
Quote:
perhaps the other server was "tuned" and it has a little more space making it show 4GB?
Perhaps it was. So try
Code:
tune2fs -l /dev/sda7
and the same for the other partition. You'll be able to see what was "tuned", and compare them.
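The listings are long, so it can help to filter out just the lines you care about. Here is a sketch run against a sample line copied in the shape of tune2fs output; on a live system you would pipe `tune2fs -l /dev/sda7` (as root) into the same filter:

```shell
# Sample line in the same shape as tune2fs -l output:
sample='Inode count:              524288'
# Split on the colon, strip the padding spaces, keep just the value:
echo "$sample" | awk -F: '{gsub(/ /, "", $2); print $2}'   # prints 524288
```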
 
Old 03-29-2010, 10:20 AM   #22
rjo98
Original Poster
From the server showing 4.0GB:

Code:
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:   /tmp1
Last mounted on:          <not available>
Filesystem UUID:          45d60059-c1ff-4cf6-a9c1-5c5cef4a94f9
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              524288
Block count:              1048233
Reserved block count:     52411
Free blocks:              1021388
Free inodes:              524173
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      255
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Filesystem created:       Fri Jun  9 08:56:25 2006
Last mount time:          Wed Jul  2 10:32:19 2008
Last write time:          Wed Jul  2 10:32:19 2008
Mount count:              22
Maximum mount count:      -1
Last checked:             Fri Jun  9 08:56:25 2006
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
First orphan inode:       20
Default directory hash:   tea
Directory Hash Seed:      350e6651-0917-4a4f-bfa8-4d97bd037ed6
Journal backup:           inode blocks
From the server I just loaded, showing 3.9GB:

Code:
tune2fs 1.39 (29-May-2006)
Filesystem volume name:   /tmp
Last mounted on:          <not available>
Filesystem UUID:          f6936de8-868e-41dd-8318-bbad842bfaaa
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1048576
Block count:              1048233
Reserved block count:     52411
Free blocks:              996931
Free inodes:              1048561
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      255
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   1024
Filesystem created:       Fri Mar 26 06:25:07 2010
Last mount time:          Fri Mar 26 06:52:56 2010
Last write time:          Fri Mar 26 06:52:56 2010
Mount count:              5
Maximum mount count:      -1
Last checked:             Fri Mar 26 06:25:07 2010
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      7ac97ae8-f4ba-45c5-a18a-f2a244ba4d30
Journal backup:           inode blocks
 
Old 03-29-2010, 11:19 AM   #23
tredegar

The only real difference (apart from dates, UUIDs and stuff) is this:
Code:
Inodes per group:         16384
Inode blocks per group:   512

Inodes per group:         32768
Inode blocks per group:   1024
And I doubt that would make such a difference.

The "4GB" server looks as though it might be quite old (its tune2fs dates from 2004, versus 2006 on the other server), and I wonder if, in fact, the difference might be due to the different versions of df running on each machine. df -h gives "human-friendly" output, and it could be that one version of df is saying "It's 3.9GB, but this is for a human, so keep it simple and just round it up to 4GB".

df is often aliased to df -h, so please unalias it first, just in case, and then run it, unaliased, on each server:
Code:
unalias df
#  if you get bash: unalias: df: not found, just ignore it
df
Now the space that was shown like this
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5              42G   30G   11G  74% /
is shown like this
Code:
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda5             43535604  30571248  10770284  74% /
Are the sizes reported differently when listed as blocks?

There is likely to be a small difference anyway because of the "Inodes per group" stuff.
 
Old 03-29-2010, 11:41 AM   #24
rjo98
Original Poster
Code blocks are nice, now that I know how to use them ;-)

I was wondering the same thing: since the 4GB server hasn't been updated in quite some time, I wonder if that makes it report differently.

But following your suggestion, the 1K-blocks figures are indeed different: 4061540 for my 3.9GB server, 4127076 for the 4GB server.

Another thing I noticed when comparing the info I put in code blocks is that, in each pair of differing values, one is exactly half the other. Not sure what that means, if anything.
 
Old 03-29-2010, 12:22 PM   #25
tredegar
Quote:
the 1K-blocks is indeed different. 4061540 for my 3.9GB server, 4127076 for the 4GB server.
Well, that's the answer. From your tune2fs listings:
Code:
                     "4.0"        "3.9"
=========================================
Inode count         524288      1048576
Inode size             128          128
So, inodes use        64MB        128MB
And 4127076 - 4061540 = 65536 Blocks of 1K
Which is 64MB
Which is exactly the difference between the filesystems.
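The arithmetic can be checked directly with shell arithmetic:

```shell
echo $(( 524288  * 128 / 1024 / 1024 ))   # inode table on the "4.0" server: 64 (MB)
echo $(( 1048576 * 128 / 1024 / 1024 ))   # inode table on the "3.9" server: 128 (MB)
echo $(( 4127076 - 4061540 ))             # df difference in 1K blocks: 65536
echo $(( (4127076 - 4061540) / 1024 ))    # = 64MB, matching the extra inode space
```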

One filesystem has an extra 64MB of inodes, so there is less space left for storing data. Which is pretty much the answer I gave you way back at post #2.

Still, it has been a useful exercise.

AFAIK you cannot change the number of inodes "on the fly". You have to reformat the partition with mke2fs, expressly stating the number of inodes you'd like rather than accepting the default.
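A sketch of what that looks like, assuming e2fsprogs is installed. Rather than touch a real partition (mke2fs destroys whatever is on it), this builds a small ext3 image in a plain file, requesting an explicit inode count with -N, and reads the result back; mke2fs may round the count up to fit its per-group layout:

```shell
# Create a 64MB scratch file and format it as ext3 (-j = journal)
# with a requested inode count of 1024 (-N). -F forces use of a
# regular file; -q suppresses the usual chatter. No root needed.
dd if=/dev/zero of=/tmp/inode-demo.img bs=1M count=64 2>/dev/null
mke2fs -q -F -j -N 1024 /tmp/inode-demo.img
tune2fs -l /tmp/inode-demo.img | grep 'Inode count'
```

On a real partition the same flags apply, with the device name (e.g. /dev/sda7) in place of the image file.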
 
1 member found this post helpful.
Old 03-29-2010, 12:42 PM   #26
rjo98
Original Poster
Thanks. At least I understand now what happened, which is the best part of all this. Is there a benefit or problem caused by having more or fewer inodes?
 
Old 03-29-2010, 12:52 PM   #27
tredegar
I am not a filesystem expert but basically:
More inodes = more ways to reference many (possibly smaller) files, but takes up more space on your partition (as we have just discovered).

Fewer inodes = uses less space on your partition, but you may suddenly run out of free inodes and therefore be unable to save a file because although there is room for the file data, there's no inode to reference it.

mke2fs, if left to its own devices, generally makes sensible decisions about how many inodes to allocate, which is why I leave it at its defaults.
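You can watch for that run-out-of-inodes failure mode before it bites: with GNU df, the -i option reports inode usage instead of block usage.

```shell
# -i: show inode totals, used, free and IUse% per filesystem;
# an IUse% near 100 means new files will fail even with free blocks.
df -i /
```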
 
Old 03-29-2010, 12:57 PM   #28
rjo98
Original Poster
I would just as soon leave it at the defaults too, and I'm very surprised that the person who loaded the other server would have changed it, as he's a very "defaults" type of guy. Is it possible the defaults changed between versions of Disk Druid, and that's where the discrepancy came from?

I appreciate all the help with this.
 
Old 03-29-2010, 01:09 PM   #29
tredegar
Quote:
is it possible defaults might have changed in different versions of disk druid
The developers tune the filesystems all the time: ext4 is probably stable enough to use now, even if you care about your data, though I am happy to wait another few months.

So, yes, it's certainly possible that in the intervening time, something changed in the way mke2fs guesses the "best" optimisations. No doubt after a magnificent row between the developers about which was the "most optimised optimisation".

But I do not think this is something that needs worrying about. A filesystem of 4.0 or 3.9-and-a-bit GB doesn't matter as long as "it works". And, as I said before, if in doubt "more is better".

Time to move on, I think.
 
Old 03-29-2010, 01:24 PM   #30
rjo98
Original Poster
Yeah, definitely time to move on; I just asked more out of curiosity. Thanks again for all your help.
 
  

