LinuxQuestions.org
Old 06-06-2008, 06:45 AM   #1
sarah1
Member
 
Registered: May 2007
Posts: 44

Rep: Reputation: 15
Defragging Mac using Linux


I am trying to find Linux software that can defrag a Macintosh (Mac OS X 10.5) hard drive.

The file system is Mac OS Extended (Journaled), on a GUID Partition Table.

I know Apple doesn't recommend defragging Macs, but I need to do it anyway.

I would like to boot a Linux live CD and then defrag the internal hard drive.
 
Old 06-06-2008, 07:18 AM   #2
MS3FGX
LQ Guru
 
Registered: Jan 2004
Location: NJ, USA
Distribution: Slackware, Debian
Posts: 5,852

Rep: Reputation: 361
I don't believe that is possible with the native Linux tools. There is an fsck.hfsplus program, but it only checks and repairs filesystem errors; it has no support for defragmenting the volume.

You can build Apple's HFS tools for Linux, which may support defragmenting.

Really, Unix filesystems don't get fragmented the way FAT/NTFS do, so it isn't surprising that there is little software designed to do it.
 
Old 06-06-2008, 08:36 AM   #3
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
You are very unlikely to have a fragmentation problem with the HFS filesystem.

You could have a problem if the disk is almost full, so that the filesystem has little choice about where to put things.

If you really do have a fragmentation problem, the best approach is to copy the entire filesystem to a new partition, then wipe and reformat the original partition and copy all the files back into place. The copy-back writes the files contiguously, so defragmentation happens automatically.

edit: this script claims it will defragment a filesystem adequately, if you really need that. There are some serious flaws in both the article and the concept, but under the right set of circumstances it will work, somewhat.

Last edited by jiml8; 06-06-2008 at 09:01 AM.
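[A minimal sketch of that copy-off / wipe / restore approach. To keep it safe to run, it uses a scratch directory that stands in for the mounted Mac volume; on the real machine you would boot the live CD and point these steps at your own mount point and backup destination instead.]

```shell
#!/bin/sh
# Demonstrates the backup / wipe / restore defrag technique on a
# scratch directory. The scratch dir stands in for the mounted volume.
set -e

vol=$(mktemp -d)    # stands in for the mounted Mac volume
bak=$(mktemp -d)    # stands in for the backup destination

# Populate the stand-in volume with some files.
mkdir -p "$vol/dir"
echo "hello" > "$vol/file1"
echo "world" > "$vol/dir/file2"

# 1. Copy everything off, preserving permissions and timestamps.
tar -C "$vol" -cpf "$bak/backup.tar" .

# 2. Wipe the volume (on a real disk you would reformat it here).
rm -rf "$vol"/*

# 3. Restore. The files are written back in one pass, so they come
#    out contiguous, i.e. defragmented.
tar -C "$vol" -xpf "$bak/backup.tar"

cat "$vol/file1" "$vol/dir/file2"
```

Nothing here is HFS+-specific; the same three steps apply to any filesystem you can mount read-write.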
 
Old 06-06-2008, 09:26 AM   #4
Emerson
LQ Sage
 
Registered: Nov 2004
Location: Saint Amant, Acadiana
Distribution: Gentoo ~amd64
Posts: 7,661

Rep: Reputation: Disabled
I agree with both replies. Keep in mind that if you need to consolidate free space, defrag won't help you.
 
Old 06-06-2008, 06:32 PM   #5
oskar
Senior Member
 
Registered: Feb 2006
Location: Austria
Distribution: Ubuntu 12.10
Posts: 1,142

Rep: Reputation: 49
I haven't even defragged an NTFS drive yet. That was necessary with FAT32, but not anymore.

Could you be more specific as to why you "need" to do it?

Last edited by oskar; 06-06-2008 at 06:34 PM.
 
Old 06-06-2008, 06:53 PM   #6
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
NTFS will fragment. It fragments quite badly, in fact.
 
Old 06-06-2008, 08:15 PM   #7
i92guboj
Gentoo support team
 
Registered: May 2008
Location: Lucena, Córdoba (Spain)
Distribution: Gentoo
Posts: 4,083

Rep: Reputation: 405
Quote:
Originally Posted by MS3FGX View Post
I don't believe that is possible with the native Linux tools. There is an fsck.hfsplus program, but it only checks and repairs filesystem errors; it has no support for defragmenting the volume.
Defragmenting means putting all the data at the beginning and all the blank space at the end. So, as long as you can write to the volume, you can defragment it: just make a backup of the whole volume, erase its contents, and then restore it. Voilà. Defragmenters are absurd; they are only needed when you want to operate on a live volume that is mounted. But since the OP seems to want to do it from a live CD, that's fine.

Note however that I know ZERO about Mac filesystems. This means that:

1.- I am not aware whether that filesystem has any peculiarity, like files that NEED to be on a specific section of the disk. In that case, you definitely don't want to use the erase/restore technique.

2.- I have absolutely no clue whether there is write support for that filesystem, and if there is, I have no idea how stable it is. So you might very well end up with a garbled filesystem unless you know what you are doing.


Quote:
Really, Unix filesystems don't get fragmented the way FAT/NTFS do, so it isn't surprising that there is little software designed to do it.

Quote:
Originally Posted by jiml8 View Post
NTFS will fragment. It fragments quite badly, in fact.
True. These things keep coming up in one thread after another, in Linux forums all over the net.

All filesystems will fragment, no matter how good or advanced they are. That's not the point. The difference is how fragmentation is handled. In Linux the I/O schedulers do a very good job, and fragmentation (besides being low) barely matters. A filesystem will perform OK even when fragmented, unless it's really full (or it's ReiserFS 3.x, in which case you are out of luck by all means).

To the OP: I am also curious why you think that you NEED to defragment.

Last edited by i92guboj; 06-06-2008 at 08:17 PM.
 
Old 06-06-2008, 08:41 PM   #8
MS3FGX
LQ Guru
 
Registered: Jan 2004
Location: NJ, USA
Distribution: Slackware, Debian
Posts: 5,852

Rep: Reputation: 361
HFS+ write support is not yet fully mature in the kernel, so you wouldn't be able to simply copy everything off and put it back on without jumping through some unfortunate hoops. The OP would need to disable journaling on the current volume, for one.

Beyond that, of course, copying to another volume and putting it back requires a spare volume of equal size, which obviously isn't always available.
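[For reference, roughly what the mount step looks like from a live CD. The device name /dev/sda2 is an assumption; find the real one with lsblk or fdisk -l. The kernel's hfsplus driver mounts a journaled volume read-only unless journaling has been disabled from OS X first (diskutil disableJournal), or unless you pass the force option, at your own risk.]

```shell
# Assumed device name; find the real one with lsblk or fdisk -l.
mkdir -p /mnt/macvol

# A journaled HFS+ volume falls back to read-only with a plain mount:
mount -t hfsplus /dev/sda2 /mnt/macvol

# After disabling journaling from OS X (diskutil disableJournal),
# or forcing read-write on a journaled volume (risky):
mount -t hfsplus -o force,rw /dev/sda2 /mnt/macvol
```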
 
Old 06-06-2008, 10:09 PM   #9
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
Quote:
Originally Posted by i92guboj View Post
Defragmenting means putting all the data at the beginning and all the blank space at the end.
Defragmenting means ensuring that files are contiguous on the disk, within contiguous extents. Most defraggers will also reorganize so that the data is at the beginning and the free space is at the end, but not only is this not a requirement, it may not be optimal, depending on the overall system architecture. Raxco PerfectDisk, as an example, divides the data, putting it at both ends and moving the free space to the middle. If the drive has only one partition, this is more efficient.

Quote:
Defragmenters are absurd; they are only needed when you want to operate on a live volume that is mounted.
This is often the case though.

Quote:
All filesystems will fragment, no matter how good or advanced they are. That's not the point. The difference is how fragmentation is handled. In Linux the I/O schedulers do a very good job, and fragmentation (besides being low) barely matters. A filesystem will perform OK even when fragmented, unless it's really full (or it's ReiserFS 3.x, in which case you are out of luck by all means).

To the OP: I am also curious why you think that you NEED to defragment.
The real issue is how badly they fragment. *nix filesystems tend to fragment very little because of the architectural choices to allocate spare space to files that might grow, and to seek out a chunk of free space big enough to hold the whole file to begin with.

Windows filesystems (including NTFS, which can fragment very badly, very quickly) tend to fill up all the space at the beginning of the drive, even if this requires fragmenting a file across multiple small gaps.

The resulting performance hit can be very profound, as disk I/O (always a bottleneck anyway) slows to a crawl with the head hunting all over the drive to reconstruct each file.
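[As an aside, on ext filesystems you can actually measure this with e2fsprogs' filefrag, which prints one "N extents found" line per file. The snippet below summarizes that output with awk; the filefrag lines here are canned sample data, since the real numbers depend on your disk.]

```shell
# Summarize filefrag-style output: count fragmented files (more than
# one extent) and total extents. On a real ext volume, replace the
# canned sample with:  filefrag /path/to/files/* | awk '...'
sample='a.bin: 1 extent found
b.bin: 12 extents found
c.bin: 3 extents found'

echo "$sample" | awk '{ total += $2; if ($2 > 1) frag++ }
    END { printf "%d of %d files fragmented, %d extents total\n", frag, NR, total }'
# -> 2 of 3 files fragmented, 16 extents total
```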
 
Old 06-06-2008, 10:27 PM   #10
i92guboj
Gentoo support team
 
Registered: May 2008
Location: Lucena, Córdoba (Spain)
Distribution: Gentoo
Posts: 4,083

Rep: Reputation: 405
Quote:
Originally Posted by jiml8 View Post
Defragmenting means ensuring that files are contiguous on the disk, within contiguous extents.
Yes, I thought that was obvious. "One file after another" also means "one entire file after another" to me. But you are right, and I thank you for explicitly stating it.

Quote:
Most defraggers will also reorganize so that the data is at the beginning and the free space is at the end, but not only is this not a requirement, it may not be optimal, depending on the overall system architecture. Raxco PerfectDisk, as an example, divides the data, putting it at both ends and moving the free space to the middle. If the drive has only one partition, this is more efficient.
That I didn't know, so thanks for pointing it out.


Quote:
The real issue is how badly they fragment. *nix filesystems tend to fragment very little because of the architectural choices to allocate spare space to files that might grow, and to seek out a chunk of free space big enough to hold the whole file to begin with.
Both things are important, but I think the I/O schedulers' effect is much more noticeable; I say this because I see no noticeable performance decrease even on highly fragmented devices. Though it's true that in Linux the concept of "highly fragmented" is very different from the Windows one.

In Linux you need to do something insane to get anywhere near 25% fragmentation.

Quote:
Windows filesystems (including NTFS, which can fragment very badly, very quickly) tend to fill up all the space at the beginning of the drive, even if this requires fragmenting a file across multiple small gaps.
True.

Quote:
The resulting performance hit can be very profound, as disk I/O (always a bottleneck anyway) slows to a crawl with the head hunting all over the drive to reconstruct each file.
That is heavily influenced by the I/O scheduler. With a good I/O scheduler the operations are rearranged in a saner way, so the file is read as sequentially as possible instead of letting the heads bounce around like mad.

Another thing I like about this is that, where hard disks are concerned, Linux is not only better in performance; it also makes your drives last longer, because it doesn't pointlessly thrash the heads reading data when the disk is fragmented (and you don't have to defragment, which saves even more read/write cycles).

I doubt that HFS is any worse in this regard. That's why I was wondering why the original poster *needs* so badly to defragment anything.
 
  

