Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!
There are in fact tools that you can use to defrag a hard disk on Linux if you have a burning desire to use them: http://www.hecticgeek.com/2012/10/de...defrag-ubuntu/
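For ext4, the stock e2fsprogs tools can at least report fragmentation without any third-party software. A minimal sketch (the temp file is just a demo target; `filefrag` may not work on every filesystem type):

```shell
#!/bin/sh
# Report how fragmented a file is. filefrag ships with e2fsprogs
# and usually needs no privileges on filesystems that support the
# FIEMAP ioctl (ext4, xfs, btrfs, ...).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=4 status=none
filefrag "$f" 2>/dev/null || echo "filefrag not usable here"
# e.g. "/tmp/tmp.abc123: 1 extent found"

# e4defrag (ext4 only, also from e2fsprogs) goes further:
#   e4defrag -c FILE-OR-DIR   # report fragmentation only
#   e4defrag    FILE-OR-DIR   # actually defragment (often needs root)
rm -f "$f"
```

A freshly written file on ext4 typically occupies a single extent, which is why the reported fragmentation is usually so low.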
I have been using Linux for 7 years. I have 2 hard drives in my desktop computer. The first is for my Linux operating systems. The second is for my data.
The second hard drive is formatted in ext3 and has been in use for several years.
Although the second hard drive is well past its prime, I have never noticed any decrease in performance in all the time that I have been using it. It has never been defragmented.
Write back if you need more help.
And welcome to the LQ forums!
EDIT: After posting I noticed that jpollard posted his answer as I was composing my own answer. Anyway, this only serves to confirm what I have posted here.
The old way to defrag was to dump all the data to a tape drive overnight and restore it the next morning; writing the files back sequentially put them all back in order. There are still a few commands that can be used to defrag. 90% of people say you don't need to, and that may be true. Some server admins may have to do it once in a while on heavily used systems, since their systems need to be fast and have optimum performance.
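That dump-and-restore cycle can be sketched with tar. In this demo a plain archive file stands in for the tape device (on real hardware it would be something like /dev/st0, and the data directory would be a mount point):

```shell
#!/bin/sh
set -e
# Demo stand-ins; on real hardware DATA is the filesystem's mount
# point and TAPE is the tape device (e.g. /dev/st0).
DATA=$(mktemp -d)
TAPE=$(mktemp)
mkdir -p "$DATA/projects"
echo "hello" > "$DATA/projects/notes.txt"

# Night: dump everything to "tape".
tar -C "$DATA" -cf "$TAPE" .

# Morning: wipe the filesystem (mkfs on a real disk) and restore.
# tar writes each file back sequentially, so every file comes back
# contiguous -- that is the entire defrag effect of this method.
rm -rf "$DATA"/*
tar -C "$DATA" -xf "$TAPE"

cat "$DATA/projects/notes.txt"   # -> hello
rm -rf "$DATA" "$TAPE"
```

The same trick doubles as a verified backup, which is why admins of that era got defragmentation "for free."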
Scandisk is the Windows counterpart of fsck (or whichever checker matches each type of filesystem in use). It is very important to learn how to use it.
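fsck itself is just a front end that dispatches to a per-filesystem checker (e2fsck for ext2/3/4, fsck.vfat, xfs_repair, and so on). You can practice safely on a filesystem image in an ordinary file, with no root access and no risk to real disks:

```shell
#!/bin/sh
set -e
# Build a tiny ext2 filesystem inside a regular file.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 status=none
mkfs.ext2 -Fq "$img"      # -F: target is a plain file, not a device

# Check it. -f forces a full check even if the fs looks clean;
# -n answers "no" to every repair prompt, so this is read-only
# and can never damage anything.
e2fsck -fn "$img"
# final line looks like: ".../... files (0.0% non-contiguous), .../... blocks"
rm -f "$img"
```

Note the "non-contiguous" figure in the summary line: that is e2fsck's fragmentation report, the same number people argue about in this thread.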
Off-topic, but anyway: Ubuntu 10.10 has not been supported since April 2012; you should upgrade to a supported version so that you get security updates and bugfixes.
Quote:
The old way to defrag was to dump all the data to a tape drive overnight and restore it the next morning; writing the files back sequentially put them all back in order. There are still a few commands that can be used to defrag. 90% of people say you don't need to, and that may be true. Some server admins may have to do it once in a while on heavily used systems, since their systems need to be fast and have optimum performance.
In almost 20 years of UNIX/Linux administration, I haven't seen ANY administrator need to defrag a disk (not since SunOS 3.2), as long as the disks were properly sized. If you need to defrag (even on heavily used systems) you have already failed - the downtime lost to the defrag only makes the system more heavily loaded.
Quote:
Scandisk is the Windows counterpart of fsck (or whichever checker matches each type of filesystem in use). It is very important to learn how to use it.
Last time I checked (my wife has to use it), it could take over 10 hours to defrag an NTFS disk, and during that time the system was effectively unusable.
I guess we all ought to welcome jcCampbell to LQ. I also agree that 10.10 may be a poor choice to keep using.
You are more than welcome to do as you wish. Like I said, many an admin has used tar to a tape every night, and many an admin has looked at their system in depth for issues. I have used tools to reduce fragmentation on heavily used systems, and I have used these methods for decades on mainframes and BSD/Linux. The defrag issue was once solved by daily tape backups on big and small systems alike; at one time that was the only proper MS solution for its server products.
So it takes 10 hours, and therefore you don't do it? I'd never wait that long either; I'd reload the OS or get more storage. If one waited long enough that running scandisk rendered the system unusable, they'd have much bigger issues. You ought to run scandisk more often, on a schedule. Could it be that your disk is more than 70% full, or that the system is old or full of spyware?
As a common user, one doesn't usually need to contend with any defrag on newer ext4 filesystems. Ext2 may not be so forgiving.
...
Quote:
You are more than welcome to do as you wish. Like I said, many an admin has used tar to a tape every night, and many an admin has looked at their system in depth for issues. I have used tools to reduce fragmentation on heavily used systems, and I have used these methods for decades on mainframes and BSD/Linux. The defrag issue was once solved by daily tape backups on big and small systems alike; at one time that was the only proper MS solution for its server products.
left out the "restore" step in defragging...
Defragging fails when you have a filesystem with 300+ TB of disk. So does a backup/restore, since it takes a day (or longer) to do just a 16 TB filesystem - a small filesystem as far as UNIX servers go. One AIX server I worked with had 10 TB just for /tmp, plus three 30 TB filesystems for other data (the OS was separate; I don't remember how big it was, but it wasn't very large - 10-15 GB for each node or thereabouts).

Backups of system files, yes - but only because the system configuration was on a configuration server, and system files could be reinstalled from an install server faster (about 30 minutes, and one install server could handle 24 nodes simultaneously). Updates to the system were relatively easy (at least for me; I didn't have to do them - the IBM CE did). The updates were applied to a console system (a special node), which pushed them to a collection of install servers, and those servers would then update their list of nodes. A full install could take about 8 hours - as I recall only 12 nodes were designated as install servers, and they had to wait for the filesystem servers to be updated before the rest of the nodes received their updates. With a total of 310 nodes (10 of them file servers), there was a delay while groups of 24 nodes were updated at a time. And the filesystems could take a while to be checked (jfs was pretty good at it).

But defrag? Never. Just add enough disks (or delete enough files to get 20-30% free space) and any fragmentation would take care of itself.
Quote:
So it takes 10 hours, and therefore you don't do it? I'd never wait that long either; I'd reload the OS or get more storage. If one waited long enough that running scandisk rendered the system unusable, they'd have much bigger issues. You ought to run scandisk more often, on a schedule. Could it be that your disk is more than 70% full, or that the system is old or full of spyware?
No - that doesn't work. She does a defrag roughly once a month but leaves it running overnight, and a scandisk when things get really slow. As I recall from the last run, the disk is normally only about 60% used.
Quote:
As a common user, one doesn't usually need to contend with any defrag on newer ext4 filesystems. Ext2 may not be so forgiving.
Ext2 was just as forgiving, though the fragmentation level could sit between 7-10% with the disk 80% full. I have never needed to defrag since ext2. Performance was excellent, and the observed fragmentation was recovered just by deleting a few files: when free space was reclaimed, the tail ends of files were repacked together, so partially used blocks were recovered as well. Files could get a bit fragmented, but new files were not a problem.
Including a number that indicate they won't work with certain filesystem usages.
Most appear to be applicable only to relatively small, static filesystems, where they optimize for rotational delay - which doesn't work well with logical volumes. In fact, defragmentation could destroy the performance of a logical volume by concentrating the load on a single underlying device; fragmenting a file among the underlying volumes would then improve performance.
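The logical-volume point is about striping: LVM deliberately spreads a volume's blocks across several physical disks, and a defragmenter that packs everything "contiguously" onto one stripe defeats that. A hedged sketch of the commands involved (device names and sizes are examples only; this needs root and real spare disks, so treat it as illustration, not a recipe):

```shell
# Two physical disks pooled into one volume group (example devices).
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc

# -i 2: stripe across 2 physical volumes; -I 64: 64 KiB stripe size.
# Consecutive 64 KiB chunks of lv_fast alternate between sdb and sdc,
# so a single large file is *intentionally* split between the disks
# and sequential reads hit both spindles in parallel.
lvcreate -i 2 -I 64 -L 100G -n lv_fast vg_data
mkfs.ext4 /dev/vg_data/lv_fast
```

Here "fragmentation" across devices is the performance feature, which is why defrag tools written for single plain partitions can make things worse.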
Quote:
Last time I checked (my wife has to use it), it could take over 10 hours to defrag an NTFS disk, and during that time the system was effectively unusable.
Use a different defrag program; O&O Defrag, for example, can defrag in the background without a noticeable impact on performance.
O&O Defrag has a free version for private use, so there is no added expense. I have never had problems with updates to that software, but of course it is up to you whether you want to use it.