LinuxQuestions.org


yukta 01-22-2014 11:21 AM

defragment harddisk
 
How can we defragment a hard disk in Linux? I am using Fedora 14.
Please reply.

strick1226 01-22-2014 12:18 PM

You're probably using ext3 as the filesystem, and one of its features is that it doesn't become fragmented over time the way many Windows filesystems do.

In short, there aren't any "defragmentation" programs for Linux because there is no need for one.

Hope this helps.

MaquinaX 01-22-2014 12:36 PM

Hello yukta,
I find this somewhat funny, because I asked the same question when I first made my transition to Linux. If your hard drive is formatted with a Linux filesystem such as ext2, ext3, or ext4, fragmentation is not really a problem: when a file is created, the filesystem also allocates additional space nearby for future changes, which keeps the file largely contiguous. If your system is acting slow, there may be other causes: a process eating all your memory, a program that didn't install properly, low system memory, or, worst case, a failing hard drive. Hope this helps.
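If you want to check for yourself, the filefrag tool from e2fsprogs reports how many extents (contiguous runs of blocks) a file occupies, and e2fsck prints an overall "non-contiguous" percentage in its summary. A rough sketch; the file and device names below are only examples, so adjust them for your own system:

    # Show how many extents a file occupies; one or two extents
    # means the file is effectively unfragmented.
    sudo filefrag -v /var/log/messages

    # Read-only forced check of an ext2/3/4 filesystem; the summary line
    # includes a "% non-contiguous" figure. Best run on an unmounted filesystem.
    sudo e2fsck -fn /dev/sdb1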


MaquinaX

metaschima 01-22-2014 12:58 PM

I have noticed that with kernels 3.10.x and newer, something has changed in filesystem or file handling that really does make defragmenting obsolete. There are plenty of scripts out there that can defragment, and I use one, but on a 3.10.x kernel it finds nothing to defrag; either it is handled automatically, or file placement is simply good enough that it is not needed.

So, I would agree that it is no longer needed with newer kernels.

jpollard 01-22-2014 06:00 PM

There hasn't been any need to defragment disks since version 1... and the ext2 filesystem.

What happens is that the filesystem uses a smarter allocation strategy that minimizes fragmentation in the first place.

Fragmentation only becomes an issue if you operate the filesystem for long periods of time at 98% capacity or higher.

By default, Linux native filesystems prevent that (well, reduce the occurrence) by reserving a percentage of the disk capacity for the root user (mke2fs reserves 5% out of the box) and by being able to relocate blocks themselves. That headroom keeps the impact of fragmentation minimal, typically under about 1.5%. The reservation is only a default; on TB-scale filesystems, where even 5% amounts to tens of gigabytes, it can be dropped to 1% (see tune2fs).
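For what it's worth, the reservation can be inspected and changed with tune2fs from e2fsprogs. A quick sketch; the device name here is only an example:

    # Show the current reserved block count for an ext2/3/4 filesystem.
    sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block'

    # Drop the reservation to 1% on a large (multi-TB) filesystem.
    sudo tune2fs -m 1 /dev/sdb1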

Some of the newer filesystems (btrfs, ext4) support online defragmentation, but it is rarely necessary to invoke it. There are user commands that trigger it (see below), but I believe they just ask the filesystem code itself to reallocate the data.
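If you ever do want to run them by hand, both filesystems ship the tools in userspace. A sketch; the paths are only examples:

    # ext4: report a fragmentation score first, then defragment if needed.
    sudo e4defrag -c /home
    sudo e4defrag /home

    # btrfs: recursively defragment a directory or subvolume.
    # Note: on btrfs this can unshare data that snapshots/reflinks were sharing.
    sudo btrfs filesystem defragment -r /home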

jefro 01-22-2014 08:08 PM

One could get into a fragmented situation. Kind of rare. I doubt you'd have an issue.

The usual suggested method is to tar off the data and copy it back: a backup and restore (see below).
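Roughly, the tar route looks like this; the device name, mount point, and tarball path are just placeholders, so adjust them for your setup, and make sure nothing is writing to the filesystem while you do it:

    # Archive everything on the filesystem, preserving permissions.
    sudo tar -cpf /backup/home.tar -C /home .

    # Recreate the filesystem (this destroys everything on it!),
    # remount it, and unpack the archive back into place.
    sudo umount /home
    sudo mkfs.ext4 /dev/sdb1
    sudo mount /dev/sdb1 /home
    sudo tar -xpf /backup/home.tar -C /home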

jpollard 01-22-2014 09:36 PM

The only time things got a bit dicey for me was when I filled the disk... clean up until 40 or 50 percent of the disk is free again, and the fragmentation takes care of itself.

In my case it was the root filesystem, and it was the logs that filled it. As the fragmented logs aged out and were deleted, the recovered space was no longer fragmented. I just needed to keep up with the logs better...

