If you were loading a new RHEL server and needed to make ext3 partitions with total sizes of exactly 4GB and 6GB, what number would you enter at the MB prompt on the Disk Druid screen when creating those partitions?
4GB = 4096MB, so you would enter 4096 for the MB requested in disk druid. I think this is what you have been (correctly) doing.
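Spelling that conversion out for both sizes in the question (binary units, where 1GB = 1024MB):
Code:
# 4GB and 6GB expressed in MB, using binary units (1GB = 1024MB)
echo $(( 4 * 1024 ))   # 4096 - enter this for the 4GB partition
echo $(( 6 * 1024 ))   # 6144 - enter this for the 6GB partition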
But the output of df -h will still read 3.9GB, because df reports the available filesystem space (the space usable for storing files), not the partition size (the space on the disk allocated to your filesystem, ext3 in your case).
To check your partition sizes use fdisk -l /dev/sda
You will see the partition sizes listed as a number of blocks, which on my system are blocks of 1024 bytes. So, for me, the partition size is the number of blocks allocated to the partition x 1024.
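A sketch of that check (the device name and figures are examples only; substitute your own):
Code:
# List the partition table; sizes are reported in blocks
fdisk -l /dev/sda
# With 1024-byte blocks, partition size = blocks x 1024
# e.g. 4194304 blocks x 1024 = 4294967296 bytes = exactly 4GB
echo $(( 4194304 * 1024 ))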
You want the installer to ask you for disk sizes in GB instead of MB, right? Perhaps you ought to consider why the installer is asking in MB as opposed to GB. If you can't answer that, then dig into what the CD is doing and change it in the source.
Or consider learning how many bits in a byte, how many bytes in a kilobyte, etc., etc.
If all you want is for your df outputs to look prettier, you may be in the wrong profession.
Thanks for the reply, tredegar. Looking at my fdisk -l for the new server, I have 4192933+ blocks for my 3.9GB partitions.
Looking at the other server I was using as a comparison here, which shows 4GB in the df output, it also has 4192933+. Maybe those filesystems aren't ext3 like I was told? How can I verify that?
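One quick way to verify the filesystem type is with df -T or blkid (a sketch; the device name is only an example):
Code:
# Show the filesystem type alongside the usual df columns
df -T
# Or ask about a specific device; this prints something like TYPE="ext3"
blkid /dev/sda5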
The only real difference (apart from dates, UUIDs and stuff) is this:
Code:
"4.0" server:
  Inodes per group:        16384
  Inode blocks per group:  512

"3.9" server:
  Inodes per group:        32768
  Inode blocks per group:  1024
And I doubt that would make such a difference.
The "4GB" server looks as though it might be quite old (tune2fs dates from 2004, and 2006 on the other server), and I wonder if, in fact the difference might be due to the different versions of df running on each machine. df -h gives "Human-friendly" output, and it could be that one version of df is saying "It's 3.9GB, but this is for a human, so keep it simple and just round it up to 4GB"
df is often aliased to df -h, so please unalias it first, just in case, and then run it, unaliased, on each server:
Code:
unalias df
# if you get bash: unalias: df: not found, just ignore it
df
Now the space that was previously shown like this
Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda5        42G   30G   11G  74% /
will be shown like this
Code:
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda5       43535604 30571248  10770284  74% /
Are the sizes reported differently when listed as blocks?
There is likely to be a small difference anyway because of the "Inodes per group" stuff.
Code blocks are nice, now that I know how to use them ;-)
I was wondering the same thing. Since the 4GB server hasn't been updated in quite some time, I wonder if that makes it report differently.
But following your suggestion, the 1K-blocks count is indeed different: 4061540 for my 3.9GB server, 4127076 for the 4GB server.
Another thing I noticed when comparing the info I put in code blocks is that the values for one server are exactly half those of the other. Not sure what that means, if anything.
Quote:
The 1K-blocks count is indeed different: 4061540 for my 3.9GB server, 4127076 for the 4GB server.
Well, that's the answer. From your tune2fs listings:
Code:
"4.0" "3.9"
=======================================
Inode Count 524288 1048576
Inode Size 128 128
So, Inodes use 64MB 128MB
And 4127076 - 4061540 = 65536 Blocks of 1K
Which is 64MB
Which is exactly the difference between the filesystems.
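Spelling out that arithmetic, using the figures from the tune2fs listings above:
Code:
# Inode overhead = Inode Count x Inode Size (in bytes)
echo $((  524288 * 128 ))   #  67108864 bytes =  64MB  ("4.0" server)
echo $(( 1048576 * 128 ))   # 134217728 bytes = 128MB  ("3.9" server)
# The extra 64MB of inodes is 65536 blocks of 1K,
# which matches 4127076 - 4061540 = 65536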
One filesystem has an extra 64MB of inodes, so there is less space to store data in. Which is pretty much the answer I gave you way back at post#2.
Still, it has been a useful exercise.
AFAIK you cannot change the number of inodes "on the fly". You have to reformat the partition with mke2fs and expressly state the number of inodes you'd like, rather than accept the default.
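A sketch of what that reformat might look like. Note this destroys everything on the partition, and the device name and inode count here are examples only:
Code:
# Back up, unmount, then reformat with an explicit inode count (-N).
# -j adds a journal, i.e. makes it ext3 rather than ext2.
umount /dev/sdb1
mke2fs -j -N 524288 /dev/sdb1
# Alternatively, -i <bytes-per-inode> lets mke2fs derive the count itself:
# mke2fs -j -i 16384 /dev/sdb1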
Thanks. At least I understand now what happened, which is the best part of all this. Is there a benefit or problem caused by having more or fewer inodes?
I am not a filesystem expert but basically:
More inodes = more ways to reference many (possibly smaller) files, but takes up more space on your partition (as we have just discovered).
Fewer inodes = uses less space on your partition, but you may suddenly run out of free inodes and therefore be unable to save a file because although there is room for the file data, there's no inode to reference it.
mke2fs, if left to its own devices, generally makes sensible decisions about how many inodes it should be allocating, which is why I leave it to its defaults.
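If you ever want to keep an eye on inode usage, df can report it directly (a sketch; the output columns are shown as a comment):
Code:
# Show total, used and free inodes per filesystem
df -i
# Columns: Filesystem  Inodes  IUsed  IFree  IUse%  Mounted on
# At 100% IUse% no new files can be created, even if free space remains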
I would just as soon leave it at the defaults also, and I'm very surprised that the person who loaded the other server would have changed it, as he's a very "defaults" type of guy. Is it possible the defaults might have changed in different versions of Disk Druid, and that's where the discrepancy came from?
Quote:
Is it possible the defaults might have changed in different versions of Disk Druid
The developers tune the filesystems all the time: ext4 is probably stable enough to use now, even if you care about your data, though I am happy to wait another few months.
So, yes, it's certainly possible that in the intervening time, something changed in the way mke2fs guesses the "best" optimisations. No doubt after a magnificent row between the developers about which was the "most optimised optimisation".
But I do not think this is something that needs worrying about. A filesystem of 4.0 or 3.9-and-a-bit GB doesn't matter as long as "it works". And, as I said before, if in doubt "more is better".