DOC diskusage reading on embedded linux filesystem
Hi LQ members,
I just joined this forum today, with a question in my head.
I have an embedded Linux board running BusyBox. The disk storage is around 4 GB (DiskOnChip), partitioned into two partitions (50 MB and the remainder) and formatted with the ext3 Linux filesystem.
I would like to know if any of you can explain this:
I pumped data into the 2nd partition, 3208 MB, and the disk usage showed 97%.
I removed that dataset and pumped a different dataset into the 2nd partition, 3178 MB, and the disk usage showed 98%.
Why does the disk usage read like that? In the 2nd attempt I pumped in less data, yet the disk usage percentage was greater. I measured it with "df -kh".
Disk space is allocated in blocks -- usually 4096 byte blocks for ext3, if I recall correctly. The size used by each file is rounded up to an integer number of blocks. (Certain filesystems can pack multiple short blocks into one physical block, so the above description does not apply to all filesystems. The ext* family of filesystems does not do that, though.)
This means that all files between 0 and 4096 bytes will always take up one block per file, 4096 bytes of disk space. All files between 4097 and 8192 bytes will take two blocks per file, 8192 bytes of disk space. And so on.
For example, on a typical ext filesystem with 4096-byte blocks, 100 files of, say, 1 to 100 bytes each will need twice as much disk space as 50 files of, say, 4000 to 4096 bytes each. This is because in both cases, each file needs exactly one block.
In your case, the latter set might just have had a lot more files, although their sizes were smaller, so they needed a larger number of blocks (including partially filled blocks) -- and therefore a larger percentage of the available disk space.
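The arithmetic above can be sketched in shell. The 4096-byte block size and the file counts below are illustrative assumptions, not measurements from the board:

```shell
# Illustrative sketch: count 4096-byte blocks consumed by two
# hypothetical datasets. All sizes here are assumptions.
block=4096

blocks_for() {
    # Round a file size up to a whole number of blocks.
    size=$1
    echo $(( (size + block - 1) / block ))
}

a_total=$(( $(blocks_for 4096) * 50 ))    # 50 files of 4096 bytes each
b_total=$(( $(blocks_for 100) * 100 ))    # 100 files of 100 bytes each
echo "dataset A: ${a_total} blocks, dataset B: ${b_total} blocks"
# prints: dataset A: 50 blocks, dataset B: 100 blocks
```

Dataset B holds far fewer bytes (10000 vs. 204800) yet allocates twice as many blocks, which is exactly how a smaller dataset can show a higher percentage in df.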
Does this answer your question?
Last edited by Nominal Animal; 12-13-2011 at 10:29 PM.
Quote:
Disk space is allocated in blocks -- usually 4096 byte blocks for ext3, if I recall correctly. The size used by each file is rounded up to an integer number of blocks.
Yes, I've read about that somewhere before.
Quote:
In your case, the latter set might just have had a lot more files, although their sizes were smaller, so they needed a larger number of blocks (including partially filled blocks) -- and therefore a larger percentage of the available disk space.
... I didn't know that one, thanks for letting me know.
Do you think another possibility makes sense? I.e., some space was actually occupied by a process (e.g. the tffs process that commits data to the DOC) using a swap/temp area, meaning that by the time I ran "df -kh" the swap/temp area had not yet been cleared. Or was some space occupied by the filesystem journal? I could not figure out how to prove either, though.
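The journal guess is plausible: ext3 reserves about 5% of blocks for root by default, and the journal itself takes space (commonly on the order of 32 MB for a partition this size); both eat into what df shows as available to you. A rough back-of-the-envelope sketch, with all numbers hypothetical rather than read from the board:

```shell
# Hypothetical arithmetic only -- these figures are assumptions. On a
# real system, check the reserved-block count with
# "tune2fs -l /dev/<partition>" (from e2fsprogs; often not present on
# BusyBox-only systems).
part_kb=4044800                           # assumed ~3.86 GB 2nd partition
reserved_kb=$(( part_kb * 5 / 100 ))      # default 5% root reservation
journal_kb=32768                          # typical 32 MB ext3 journal
usable_kb=$(( part_kb - reserved_kb - journal_kb ))
echo "space actually usable for data: ${usable_kb} KB"
# prints: space actually usable for data: 3809792 KB
```

So even before any files are written, a noticeable slice of the partition is already spoken for, which shifts the percentage df reports.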