Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
Is there a way to defrag a vfat filesystem from Linux? Does fsck defrag for you? Can you use 'wine defrag.exe'?
I only have Windows on one system (I don't like rebooting that box just to defrag one drive), but I have several USB hard drives that I'd like to be able to defragment. I can't just switch them to ext2 because 1) some are in MP3 players and the firmware can't read ext2, and 2) I sometimes use them to back up Windows systems when I do repairs for people, so they need to work in Windows as well.
1. Use cp -p to copy all of the files to another partition.
2. umount the original VFAT partition.
3. Reformat the original VFAT file system using mkdosfs.
4. mount the original VFAT partition again.
5. Use cp -p to copy all of the files back to the original partition.
6. Delete the copied files on the second partition.
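The steps above can be sketched as a small script. The device name, mount point, and scratch directory are placeholders you must substitute; note that this erases and recreates the filesystem, so run it as root and only after the first copy succeeded:

```shell
#!/bin/sh
# Copy-off / reformat / copy-back "defrag" for a VFAT partition.
# All three arguments are hypothetical examples -- substitute your own.
# WARNING: mkdosfs ERASES the filesystem on DEVICE.

vfat_defrag() {
    DEVICE=$1    # e.g. /dev/sdb1
    MNT=$2       # e.g. /mnt/usb
    SCRATCH=$3   # staging directory on another partition

    if [ -z "$DEVICE" ] || [ -z "$MNT" ] || [ -z "$SCRATCH" ]; then
        echo "usage: vfat_defrag DEVICE MOUNTPOINT SCRATCHDIR" >&2
        return 1
    fi

    cp -pR "$MNT"/. "$SCRATCH"/ || return 1  # -p preserves timestamps
    umount "$MNT"               || return 1
    mkdosfs -F 32 "$DEVICE"     || return 1  # recreate an empty FAT32 fs
    mount "$DEVICE" "$MNT"      || return 1
    cp -pR "$SCRATCH"/. "$MNT"/ || return 1  # files come back contiguous
    rm -rf "$SCRATCH"
}
```

The freshly copied files are written into an empty filesystem, so each one ends up in a single contiguous run, which is the whole point of the exercise.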
Is drive fragmentation actually presenting a compelling problem? In other words, do you feel actual pain? Or do you simply have the gut-feeling that defragmenting the drive is "a good thing to do?" If you feel no pain, then leave well-enough alone.
Even with poor ol' VFAT, you can actually leave a drive completely alone for years and it will probably take care of itself. Unless the drive is full or very nearly so, and unless many of the files on the drive grow substantially in size after they are created, fragmentation is probably not a compelling issue.
Since I frequently fill my drives by copying and deleting hundreds of backup files and/or music tracks, I'm sure they're becoming quite fragmented. Normally this wouldn't be a big deal, but on my 80GB Neuros I notice a substantial difference in battery life (a little more than an hour) between when the drive is very fragmented and when it has been recently defragged.
Keeping a drive heavily fragmented can actually shorten its lifetime. Instead of reading sequentially, the head has to seek all over the platter, which can lead to premature wear.
This is only a problem with FAT12/16/32 or NTFS, not really a Linux/Unix-specific issue. My problem, however, is that I need to defrag other drives and my multiboot flash disk (normally you never want to defrag an SSD), but for the loop filesystem to work you need a contiguous ISO.
So I know there are some reasons to defrag a FAT-based filesystem while running Linux, and I would love to see someone actually attempt an answer. There might be discussion about when that need arises, but nevertheless, without the ability to do it that's a moot point (and I personally believe there most likely is a way, so it's not a moot point).
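One way to check whether a file on a FAT volume is actually contiguous is filefrag, which ships with e2fsprogs and works on non-ext filesystems too via the FIBMAP ioctl. The ISO path below is a placeholder for your multiboot image; a loop-mountable ISO should report exactly 1 extent:

```shell
# Report how many extents (fragments) a file occupies.
# Typical filefrag output: "/path/file: 3 extents found"
count_extents() {
    filefrag "$1" | awk '{print $(NF-2)}'
}

# Hypothetical example path:
#   count_extents /mnt/flash/boot/rescue.iso   # want this to print 1
```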
So... has this ever been resolved? It is still an actual issue: for me, I have some "unmovable" system files all over the place, and when I write a hiberfil.sys onto the system, it gets so fragmented that it takes a couple of minutes to write instead of the optimal 15 seconds... I am tempted to go the "move all files off and back on" route...
If you go this way, use the chance to format the drive with NTFS instead of FAT32: it fragments more slowly, doesn't have the 4GB file size limit, and is a much better underlying filesystem for Windows.
Heh, you don't know that I still use FAT16 because it is something like twice as fast as FAT32 or NTFS... So no thanks to your offer. I just need it fully defragmented once, to lay down a contiguous hiberfil.sys, so I am not really concerned with filesystem fragmentation after that. After all, once I lay down the hiberfil.sys, the computer runs way faster than other people's computers, even with 50MB of free space...
That's... kinda pitiful. Probably time for some new hardware.
Do you run XP or 2000? FAT16 has a maximum partition size of 2GB, and you can put your OS and your hiberfil.sys on that? You really must have a low spec machine.
I bet my hardware is faster than yours...
Quote:
You really must have a low spec machine.
Actually, I have XP... Why would I have a low-spec machine if I were trying to make it fast? The 2GB limit only applies when formatting from Windows. If you format the partition under Unix, you can have 4GB. Then just put the beginning of the HIBERFIL.SYS before the 2GB mark, and it works just fine with 2GB of RAM.
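For reference, the trick being described is creating FAT16 with 64KB clusters, which Linux's mkdosfs can do. With 512-byte sectors, 128 sectors per cluster gives 64KB clusters, and 65,536 clusters of 64KB is how FAT16 reaches 4GB. The device name is hypothetical, and mkdosfs erases whatever it is pointed at:

```shell
# Create a 4 GB-capable FAT16 filesystem. TARGET is a hypothetical
# device (e.g. /dev/sdb1) or an image file; mkdosfs ERASES it.
# -F 16 forces FAT16; -s 128 = 128 sectors/cluster = 64 KB clusters.
mkfat16() {
    mkdosfs -F 16 -s 128 "$1"
}

# e.g. (destructive!):  mkfat16 /dev/sdb1
```

Windows XP can read such a volume even though its own format tool refuses to create FAT16 partitions over 2GB.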
Last edited by Александръ; 06-02-2011 at 03:21 PM.
Well, unlike you I'm not resorting to FAT16 to squeeze performance out of my system. My systems are fast enough for my needs while still running modern filesystems like NTFS, JFS, or BTRFS.
My (only) Windows system is an Alienware M17x_R2. It's pretty fast. Fast enough that I can play my games with max settings at the native 1200p resolution. It uses NTFS on a 7200 RPM drive.
My daily workhorse is a Lenovo ThinkPad X300 running Slackware64 on an SSD (JFS, but I'm thinking about switching to BTRFS soon). Again, plenty fast for what I need. Disk I/O is rarely a bottleneck for my workload.
I'm curious what you're doing that makes you need to squeeze performance out of a decade-old OS with a filesystem that's limited to 4GB of storage, yet even with that limited capacity, disk I/O is apparently a bottleneck for you. If it were me, I'd use a 64-bit OS with 8GB of RAM and carve 4GB of that out as a RAM disk. That solves your I/O latency problem, since capacity is apparently not a concern.
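On Linux, the RAM disk suggested above is just a tmpfs mount: its contents live entirely in memory, so reads and writes never touch the disk. The size and mount point below are placeholder examples, and mounting requires root:

```shell
# Create a RAM-backed filesystem of a given size at a given mount point.
# Example values are hypothetical; run as root.
make_ramdisk() {
    size=$1    # e.g. 4g
    mnt=$2     # e.g. /mnt/ramdisk
    mkdir -p "$mnt" &&
    mount -t tmpfs -o size="$size" tmpfs "$mnt"
}

# e.g. (as root):  make_ramdisk 4g /mnt/ramdisk
```

Anything copied there vanishes at reboot, so it suits scratch data and caches rather than anything you need to keep.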
Edit: I can understand if for some reason you have hardware constraints which necessitate squeezing performance where you can, but it sounds to me like newer hardware could improve your system more effectively than gimping it with an old OS and filesystem... unless you're just tweaking for the sake of tweaking. Like a ricer.
Last edited by DragonWisard; 06-02-2011 at 03:45 PM.