I don't believe that is possible with the native Linux tools. There is an fsck.hfsplus program, but it only checks and repairs filesystem errors; it doesn't have any support for defragmenting the volume.
You can build Apple's HFS tools for Linux, which may support defrag.
Really, Unix filesystems don't fragment the way FAT/NTFS do, so it's not surprising that there is little software designed to defragment them.
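For anyone who wants to try the check-and-repair route anyway, here is a minimal sketch of driving fsck.hfsplus (from the hfsprogs package) from Python. The device path is hypothetical, and the -n (check-only) flag is an assumption carried over from Apple's fsck_hfs; verify it against your fsck.hfsplus man page first.
Code:
import subprocess

device = "/dev/sdb2"  # hypothetical HFS+ partition; adjust to your setup

# Run a read-only consistency check; -n is assumed to mean "report only,
# make no changes" as in Apple's fsck_hfs -- confirm before relying on it.
result = subprocess.run(
    ["fsck.hfsplus", "-n", device],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    # A non-zero exit usually means problems were found; rerunning
    # without -n would attempt repairs.
    print("Errors reported; rerun without -n to attempt repair.")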
You are very unlikely to have a fragmentation problem with the hfs filesystem.
You could have a problem if the disk is almost full so that the filesystem has little choice about where to put things.
If you really have a fragmentation problem, the best thing to do is to copy the entire filesystem to a new partition, then wipe the original partition, reformat it, and copy all the files back into place. Writing the files back onto the fresh filesystem lays them out contiguously, so defragmentation happens automatically.
edit: this script claims it will defragment a filesystem adequately, if you really need that. There are some serious flaws in both the article and the concept, but under the right set of circumstances it'll work somewhat.
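If the copy / wipe / copy-back route is what you end up doing, here is a rough sketch in Python of the whole cycle. Everything in it is an assumption: the device, the mount points, and working HFS+ write support (see the caveats later in this thread). Make sure the backup is verified before the mkfs step, since that step destroys the original data.
Code:
import subprocess

src_mount = "/mnt/hfs"        # hypothetical: the fragmented volume, mounted
backup = "/mnt/spare/backup"  # hypothetical: spare volume with enough room
device = "/dev/sdb2"          # hypothetical: the HFS+ partition itself

def run(*cmd):
    subprocess.run(cmd, check=True)  # abort on the first failure

run("rsync", "-aHX", f"{src_mount}/", f"{backup}/")   # 1. copy everything off
run("umount", src_mount)                              # 2. unmount the original
run("mkfs.hfsplus", device)                           # 3. reformat -- destroys data!
run("mount", "-t", "hfsplus", device, src_mount)      # 4. remount the fresh fs
run("rsync", "-aHX", f"{backup}/", f"{src_mount}/")   # 5. copy back; files land contiguously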
Quote:
I don't believe that is possible with the native Linux tools. There is an fsck.hfsplus program, but it only checks and repairs filesystem errors; it doesn't have any support for defragmenting the volume.
Defragmenting means putting all the data at the beginning and all the blank space at the end. So, as long as you can write to the volume, you can defragment it. Just make a backup of the whole volume, erase its contents and then restore it. Voilà. Defragmenters are absurd, and they are only needed when you want to operate on a live volume that is mounted. But since the OP seems to want to do it from a livecd, that's fine.
Note however that I know ZERO about mac filesystems. This means that:
1.- I am not aware whether that filesystem has any peculiarity, like files that NEED to live in a specific area of the disk. In that case, you definitely don't want to use the erase/restore technique.
2.- I have absolutely no clue whether there's write support for that filesystem, and if there is, I have no idea how stable it is. So you might very well end up with a garbled filesystem unless you know what you are doing.
Quote:
Really, Unix filesystems don't fragment the way FAT/NTFS do, so it's not surprising that there is little software designed to defragment them.
Quote:
Originally Posted by jiml8
NTFS will fragment. It fragments quite badly, in fact.
True. These things keep coming up in one thread after another in Linux forums all over the net.
All filesystems fragment, no matter how good or advanced they are. That's not the point; the difference is how fragmentation is handled. In Linux the I/O schedulers do a very good job, and fragmentation (besides being low) is mostly harmless. A filesystem will perform OK even if fragmented, unless it's really full (or it's reiserfs 3.x, in which case you are out of luck by all means).
To the OP: I am also curious why you think that you NEED to defragment.
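Since the I/O scheduler keeps coming up: you can see which scheduler a disk is using through sysfs, which is a standard kernel interface. A tiny sketch (the device name is an assumption):
Code:
# The active scheduler is the one shown in [brackets], e.g. "noop [cfq]".
dev = "sda"  # hypothetical disk name
with open(f"/sys/block/{dev}/queue/scheduler") as f:
    print(f.read().strip())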
HFS+ write support is not yet fully mature in the kernel, so you wouldn't be able to simply copy everything off and put it back on without jumping through some unfortunate hoops. He would need to disable journaling on his current volume, for one.
Beyond that, of course, copying to another volume and putting it back requires a spare volume of equal size, which is obviously not always available.
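For the record, the kernel's hfsplus driver will only mount a journaled volume read-only; journaling has to be disabled first (from OS X, diskutil disableJournal does it). There is a "force" mount option that overrides the check, but it risks corrupting a journaled volume. A hedged sketch, with hypothetical device and mount point:
Code:
import subprocess

device, mountpoint = "/dev/sdb2", "/mnt/hfs"  # both hypothetical
# "force" tells the hfsplus driver to allow read-write even when it would
# otherwise refuse; only sensible once journaling has been disabled.
subprocess.run(
    ["mount", "-t", "hfsplus", "-o", "rw,force", device, mountpoint],
    check=True,
)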
Quote:
Defragmenting means putting all the data at the beginning and all the blank space at the end.
Defragmenting means ensuring that files are contiguous on the disk, within contiguous extents. Most defraggers will also reorganize so that the data is at the beginning and the free space is at the end, but this is not a requirement, and it may not even be optimal depending on the overall system architecture. Raxco PerfectDisk, as an example, divides the data, putting it at both ends and moving the free space to the middle. If the drive has only one partition, this is more efficient.
Quote:
Defragmenters are absurd, and they are only needed when you want to operate on a live volume that is mounted.
This is often the case though.
Quote:
All filesystems fragment, no matter how good or advanced they are. That's not the point; the difference is how fragmentation is handled. In Linux the I/O schedulers do a very good job, and fragmentation (besides being low) is mostly harmless. A filesystem will perform OK even if fragmented, unless it's really full (or it's reiserfs 3.x, in which case you are out of luck by all means).
To the OP: I am also curious why you think that you NEED to defragment.
The real issue is how badly they will fragment. *nix filesystems tend to fragment very little because of the architectural choice to allocate spare space to files that might grow and to seek a big enough chunk of free space on the HD to place the whole file to begin with.
Windows filesystems (including NTFS - which can fragment very very badly and very very quickly) tend to fill up all space at the beginning of the drive, even if this requires fragmenting a file to fill multiple small spaces.
The resulting performance hit can be very profound as disk I/O (always a bottleneck anyway) slows to a crawl with the head hunting all over the drive to reconstruct each file.
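If you want to see this allocation behavior for yourself, filefrag (from e2fsprogs) reports how many extents a file occupies on most Linux filesystems (it won't help on HFS+, and it generally needs root). A quick sketch with a hypothetical path:
Code:
import subprocess

# Prints something like "/var/log/syslog: 3 extents found";
# a file in a single extent is completely unfragmented.
out = subprocess.run(
    ["filefrag", "/var/log/syslog"],  # hypothetical target file
    capture_output=True, text=True,
).stdout
print(out.strip())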
Quote:
Defragmenting means ensuring that files are contiguous on the disk, within contiguous extents.
Yes. I thought that was obvious. "One file after another" also means "one entire file after another" to me. But you are right, and I thank you for explicitly stating that.
Quote:
Most defraggers will also reorganize so that the data is at the beginning and the free space is at the end, but this not only is not a requirement, it may not be optimal depending on the overall system architecture. Raxco PerfectDisk as an example, divides the data, putting it at both ends and moving the free space to the middle. If the drive has only one partition, this is more efficient.
That I didn't know. Many thanks for pointing it out.
Quote:
The real issue is how badly they will fragment. *nix filesystems tend to fragment very little because of the architectural choice to allocate spare space to files that might grow and to seek a big enough chunk of free space on the HD to place the whole file to begin with.
Both things are important, but I think that the I/O schedulers' effects are much more noticeable, and I say this because I get no noticeable performance decrease even on highly fragmented devices. Though it's true that in Linux the concept of "highly fragmented" is very different from the Windows one.
In Linux you need to do something insane to get anywhere near 25% fragmentation.
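For a whole-volume figure on ext2/3, e2fsck's summary line reports the percentage of non-contiguous files. A sketch of pulling that number out (run it read-only with -n, on an unmounted partition; the device is hypothetical):
Code:
import re, subprocess

out = subprocess.run(
    ["e2fsck", "-fn", "/dev/sdb1"],  # -f: force check, -n: read-only
    capture_output=True, text=True,
).stdout
# The summary ends with something like "(9.1% non-contiguous)".
m = re.search(r"\(([\d.]+)% non-contiguous\)", out)
if m:
    print(f"{m.group(1)}% of files are fragmented")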
Quote:
Windows filesystems (including NTFS - which can fragment very very badly and very very quickly) tend to fill up all space at the beginning of the drive, even if this requires fragmenting a file to fill multiple small spaces.
True.
Quote:
The resulting performance hit can be very profound as disk I/O (always a bottleneck anyway) slows to a crawl with the head hunting all over the drive to reconstruct each file.
That's heavily influenced by the I/O scheduler. With a good I/O scheduler, the operations are rearranged in a saner way, so the file is effectively read as sequentially as possible, instead of letting the heads bounce around like mad.
Another thing I like about this is that, where hard disks are concerned, Linux is not only better in performance, but it also makes your drives last longer, because it doesn't pointlessly thrash the heads reading data from a fragmented disk (and you don't have to defragment, which saves even more r/w cycles).
I doubt that hfs is any worse in this regard. That's why I was wondering why the original poster *needs* so badly to defragment anything.
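And to round off the scheduler tangent: you can switch a disk's scheduler at runtime by writing to the same sysfs file shown earlier (needs root; the device and scheduler names are assumptions, and the available choices are listed in the file itself):
Code:
dev, scheduler = "sda", "deadline"  # both hypothetical
with open(f"/sys/block/{dev}/queue/scheduler", "w") as f:
    f.write(scheduler)  # takes effect immediately for that device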