Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
du is reporting disk usage in whole filesystem blocks: every file occupies at least one 4 KB block, so a file smaller than that will still be reported as 4 KB. The -h in your first command stands for "human readable"; it just rounds the block-based usage to a convenient unit rather than printing an exact byte count.
You'll notice that directories show up as 4 KB as well, even when the directory entries themselves only amount to a few bytes.
But remember, different filesystems, and the options you chose when formatting, will give different results. Usually the output in bytes (-b) is the most accurate count, or for larger output, kilobytes will be more accurate. With human-readable form, du rounds the number to make it easier to read instead of printing a long string of digits.
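To see this rounding for yourself, here is a minimal Python sketch (not from the thread; it just inspects the same numbers du uses). st_size is the apparent size that "du -b" sums, and st_blocks * 512 is the allocated disk usage that plain "du" reports:

```python
import os
import tempfile

# Create a tiny 10-byte file and compare its apparent size
# with the space actually allocated for it on disk.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"0123456789")  # 10 bytes of content
    os.close(fd)
    st = os.stat(path)
    print("apparent size:", st.st_size)        # 10 bytes (what du -b shows)
    print("allocated:", st.st_blocks * 512)    # e.g. 4096 on ext4 (what du shows)
finally:
    os.unlink(path)
```

On a typical ext4 filesystem the 10-byte file still consumes one full 4 KB block.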
No, no... the -b option doesn't just give byte granularity, it also gives the "apparent size". With just the -h option, "du" will report actual "disk usage".
Every file is composed of N blocks. A block is typically 4 KB on any modern drive.
However, most filesystems in Linux support what we call "sparse files".
If you open a file, seek to position X, and then write a byte, the apparent file size will be X+1 bytes, but the real space occupied on disk won't include blocks that were never written to; it will be just one block. Basically, only file positions that have been written to will take up space on the disk, whereas positions that were never written will read back as all zeros.
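A quick Python sketch of exactly that seek-and-write experiment (assuming a filesystem with sparse-file support, e.g. ext4, XFS, or tmpfs):

```python
import os
import tempfile

# Create a sparse file: seek 1 MiB past the start, then write one byte.
# The blocks in the "hole" are never allocated on disk.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 1024 * 1024, os.SEEK_SET)  # seek to offset X = 1 MiB
    os.write(fd, b"x")                      # write a single byte
    os.close(fd)

    st = os.stat(path)
    apparent = st.st_size            # X + 1 bytes: what "du -b" reports
    allocated = st.st_blocks * 512   # real disk usage: what plain "du" reports
    print("apparent:", apparent)
    print("allocated:", allocated)
finally:
    os.unlink(path)
```

On a sparse-capable filesystem, allocated comes out far smaller than apparent, because only the single written block exists on disk.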
Oops, I just read your post again and this wasn't your problem.
Your problem is that you're checking the size of a directory that is full of small files.
If you have a directory with 600 files of 10 bytes each, the apparent size is 6000 bytes, but the actual disk usage will be a block for each file: 600 * 4 KB = 2400 KB.
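The arithmetic above can be checked directly (assuming the 4 KB block size discussed in this thread):

```python
# Worked example: 600 files of 10 bytes each on a 4 KB-block filesystem.
block_size = 4096
n_files, file_size = 600, 10

apparent = n_files * file_size    # what "du -b" would sum over the files
allocated = n_files * block_size  # each file rounds up to one whole block

print("apparent bytes:", apparent)          # 6000
print("allocated bytes:", allocated)        # 2457600
print("allocated KB:", allocated // 1024)   # 2400
```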
Different filesystems will handle this differently, however; that is why we have so many of them.
I remember that ReiserFS (I'm not sure if it was version 3 or 4) tried to store small files together with the directory's metadata (tail packing) so that they wouldn't each waste a full block, but I was never a ReiserFS fan.
If you use a lot of small files, you can try setting the filesystem block size to a smaller value to reduce the "slack", at some performance cost. This isn't an easy operation if you don't have enough spare storage to temporarily move all your files while you reformat...
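To quantify that trade-off before reformatting, a small sketch (hypothetical workload; the slack() helper is mine, not a standard tool) comparing the waste at different block sizes:

```python
import math

def slack(sizes, block_size):
    """Total bytes wasted by rounding each file up to a whole block."""
    return sum(math.ceil(s / block_size) * block_size - s for s in sizes)

# Hypothetical workload: 600 ten-byte files, as in the example above.
sizes = [10] * 600

for bs in (1024, 4096):
    print(f"block size {bs}: {slack(sizes, bs)} bytes of slack")
```

With 1 KB blocks each 10-byte file wastes 1014 bytes instead of 4086, so total slack drops from about 2.3 MB to about 0.6 MB for this workload.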