If you were loading a new RHEL server and needed to make ext3 partitions of exactly 4GB and 6GB, what number would you enter at the MB prompt on the disk druid screen when creating each partition?
4GB = 4096MB, so you would enter 4096 at the MB prompt in disk druid. I think this is what you have been (correctly) doing.
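For reference, the arithmetic (disk druid is counting in binary units, so 1GB = 1024MB here):

echo $(( 4 * 1024 )) # 4096 -> enter this for the 4GB partition
echo $(( 6 * 1024 )) # 6144 -> enter this for the 6GB partition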
But the output of df -h will still read 3.9GB, because df reports available filespace (the space for storing files), not the partition size (the space on the disk allocated to your filesystem, ext3 in your case).
To check your partition sizes, use fdisk -l /dev/sda
You will see the partition sizes listed as a number of blocks; on my system those are 1024-byte blocks, so for me the partition size is the number of blocks allocated to the partition x 1024.
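On an older fdisk the output looks something like this (the numbers below are made up for illustration, not taken from either of your servers):

fdisk -l /dev/sda

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 535 4194304 83 Linux

# 4194304 blocks x 1024 bytes/block = 4294967296 bytes = exactly 4GB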
You want the installer to ask you for disk sizes in GB instead of MB, right? Perhaps you ought to consider why the installer asks in MB rather than GB. If you can't answer that, dig into what the CD is doing and change it in the source.
Or consider learning how many bits in a byte, how many bytes in a kilobyte, etc., etc.
If all you want is for your df outputs to look prettier, you may be in the wrong profession.
The only real difference (apart from dates, UUIDs and stuff) is this. One server has:

Inodes per group: 16384
Inode blocks per group: 512

and the other has:

Inodes per group: 32768
Inode blocks per group: 1024
And I doubt that would make such a difference.
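If you want to pull those lines out yourself on each server, something like this should do it (tune2fs -l needs root; /dev/sda5 is just the device from your df output, so substitute your own):

tune2fs -l /dev/sda5 | grep -i inode
# picks out Inode count, Inodes per group, Inode blocks per group, Inode size, ...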
The "4GB" server looks as though it might be quite old (tune2fs dates from 2004, and 2006 on the other server), and I wonder if, in fact the difference might be due to the different versions of df running on each machine. df -h gives "Human-friendly" output, and it could be that one version of df is saying "It's 3.9GB, but this is for a human, so keep it simple and just round it up to 4GB"
df is often aliased to df -h, so please unalias it first, just in case, and then run it unaliased on each server:

unalias df # if you get "bash: unalias: df: not found", just ignore it
df
Now the space that was shown like this
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 42G 30G 11G 74% /
is shown like this
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 43535604 30571248 10770284 74% /
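If you want to check the rounding yourself, convert the 1K-blocks figure to GB by dividing by 1024 twice:

echo "scale=1; 43535604 / 1024 / 1024" | bc
# 41.5 -- which df -h rounds to the 42G shown above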
Are the sizes reported differently when listed as blocks?
There is likely to be a small difference anyway because of the "Inodes per group" stuff.
I am not a filesystem expert but basically:
More inodes = more ways to reference many (possibly smaller) files, but the inode tables take up more space on your partition (as we have just discovered).
Fewer inodes = less space used on your partition, but you may suddenly run out of free inodes and be unable to save a file: although there is room for the file data, there is no inode left to reference it.
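You can keep an eye on that with df -i, which reports inode usage the same way plain df reports block usage. A quick illustration (the numbers below are made up, not from either of your servers):

df -i /
# Filesystem Inodes IUsed IFree IUse% Mounted on
# /dev/sda5 2731008 271200 2459808 10% /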
mke2fs, if left to its own devices, generally makes sensible decisions about how many inodes it should be allocating, which is why I leave it to its defaults.
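If you are ever curious what mke2fs would decide for a given device, -n does a dry run, and -N lets you override the inode count if you really must (double-check the man page for your version; /dev/sdb1 below is a hypothetical spare partition, not one of your real disks):

mke2fs -n /dev/sdb1 # -n: print what would be done, but create nothing
mke2fs -j -N 1048576 /dev/sdb1 # force ~1M inodes on an ext3 (-j) filesystem -- this one really does destroy whatever is on sdb1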
I would just as soon leave it at the defaults too, and I'm very surprised that the person who loaded the other server would have changed it, as he's a very "defaults" type of guy. Is it possible the defaults changed between versions of disk druid, and that's where the discrepancy came from?
Quote: Is it possible the defaults changed between versions of disk druid
The developers tune the filesystems all the time: ext4 is probably stable enough to use now, even if you care about your data, though I am happy to wait another few months.
So, yes, it's certainly possible that in the intervening time, something changed in the way mke2fs guesses the "best" optimisations. No doubt after a magnificent row between the developers about which was the "most optimised optimisation".
But I do not think this is something that needs worrying about. A filesystem of 4.0 or 3.9-and-a-bit GB doesn't matter as long as "it works". And, as I said before, if in doubt "more is better".