Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
No, they put everything in order in the first place. I couldn't give you a technical explanation, though; I just presume the drivers plan where to put files vastly more effectively than FAT32 does.
Actually, I think the Macintosh fragments as well, not just Micro$oft OSes. Linux uses the ext2 FS, which uses inodes. I'm not sure how it works, but my guess would be that it doesn't leave empty spaces on the disk when files are removed. Someone correct me if I'm wrong, OK?
ext2 was built mainly for speed - since hard disks are the lumbering dinosaurs of the computer world, ext2 takes advantage of the extra time available during disk rotation to allocate inodes carefully, so that fragmentation doesn't occur in the first place. That's part of the answer, anyway; it goes hand in hand with the argument for complex memory management. If you want more specifics, though, you'll really need to read up on the allocation schemes for ext2 compared to some other filesystems. It's not as easy or simple as it sounds.
Actually, I have a filesystem on an AIX 4.3 RS6000 43P that is fragmented, and I'm planning on running 'defragfs' on it this weekend. I've never seen a Unix-type OS fragment, but since the 'defragfs' tool exists and the disk is fragmented, I'd say that it is possible and does happen on OSes other than Micro$oft. Though this is obviously extremely rare.
If you look closely at a UNIX filesystem, you'll notice that fragmentation actually does occur on disk. UNIX sees a disk as a collection of blocks of a predefined size. Assuming a block size of 4K, a 9K file takes 3 blocks: two full 4K blocks, plus a third 4K block for the remaining 1K of the file, leaving 3K of that block unused. The next file starts at the beginning of the next 4K block, which keeps the space contiguous. But there is a difference between the FAT system used by Windows and the ext2 system used by Linux. All information about files in UNIX is stored in inodes. There is a table of inodes, and each entry in this table points to an actual file (everything in Linux is a file: regular files, directories, and special files like block and char devices). It's not that simple, though; an inode also contains permissions, modification times, and file state, and it gets more complex still for special files.
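The block arithmetic above can be sketched in a few lines of Python. This is just a rough illustration of the rounding-up, not how any kernel actually allocates blocks:

```python
def block_usage(file_size, block_size=4096):
    """Blocks needed to store a file, and the slack (unused tail bytes)
    left in the last block."""
    blocks = -(-file_size // block_size)  # ceiling division
    slack = blocks * block_size - file_size
    return blocks, slack

# The 9K-file example from the post: three 4K blocks, 3K of slack.
print(block_usage(9 * 1024))         # (3, 3072)
# With 1K blocks the same file wastes nothing, but needs 9 entries.
print(block_usage(9 * 1024, 1024))   # (9, 0)
```

The slack inside the last block is internal fragmentation, which is a different thing from the scattered-extents fragmentation the thread is mostly about, but it's the cost that drives the block-size trade-off discussed below.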
Would it be a better idea, then, to have 1k blocks? Obviously the table for all inodes would be much bigger, but how much difference would it make on large hard disks (>10GB)?
If most of your files are only a few kilobytes, you could save space (but not speed) by setting the block size to 1k. If your files are all big, on the other hand, a larger block size saves space, because there are far fewer blocks to index and the tables stay small.
On most computers file sizes vary greatly, so it's best to choose a block size that works well in most situations. The default of 4k seems to work fine for most people. But if you have one partition that only stores large files, you could probably gain both speed and space by setting the block size to something higher.
I don't know all the details of the filesystem, so I can't put exact numbers on how it would affect a 10GB disk. But consider that with a block size of 1k, every 1k block has to be indexed somewhere (strictly it's the block pointers rather than the inodes themselves that index data blocks, but the scaling is the same). A 10GB disk has about 10.5 million 1k blocks. Say each index entry takes 10 bytes: that's roughly 100MB just for the table, and I think a backup copy is kept as well, so at least double that. If you instead use 4k blocks, you would need roughly a quarter of that space.
None of those are accurate numbers; they're just to give you an idea of how much difference it might make.
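That back-of-the-envelope estimate is easy to reproduce. Note the 10-bytes-per-entry figure and the doubled backup copy are the post's own assumptions, not real ext2 on-disk numbers:

```python
def index_overhead(disk_bytes, block_size, entry_bytes=10, copies=2):
    """Rough size of a per-block index table, using the post's
    assumptions: one entry per block, ~10 bytes per entry, plus one
    backup copy. Real ext2 metadata is laid out quite differently."""
    entries = disk_bytes // block_size
    return entries * entry_bytes * copies

GB = 1024 ** 3
# 10GB disk: the 4k-block table is a quarter the size of the 1k one.
print(index_overhead(10 * GB, 1024))   # 209715200  (~200 MB)
print(index_overhead(10 * GB, 4096))   # 52428800   (~50 MB)
```

The absolute numbers are as made-up as the post says, but the 4x ratio between the two block sizes is exactly the point being argued.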
I read about this: the reason ext2 filesystems do not fragment as much is that they are CYLINDER based, instead of Windows' SECTOR based approach.
The only way to fragment one is to fill it up to near full capacity (so don't do that).
My theory is that since there are many more cylinders than sectors, each one is much smaller. So instead of big blocks of information being scattered all through one humongous sector, a cylinder holds maybe one or two programs, which I'd say makes it much harder to fragment.