disk defrag and temp files
Does Linux have a need for, and system tools for, hard drive defragmentation and cleanup of temporary files, such as there are in Windows XP? I am using Ubuntu 6.06 and openSUSE 10.2 with the ext3 file system.
|
There are some defraggers out there, but you don't need them. As for temporary files and logs, most distros clean them out at boot, but check your distro's docs for details.
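For the temp-file side of the question: there is no single "Disk Cleanup" dialog, because stale files under /tmp are just old files you can list and remove yourself. A minimal sketch, assuming a 7-day threshold; the function name `stale_files` and the threshold are my own choices for illustration:

```python
import os
import time

def stale_files(root="/tmp", days=7):
    """List files under `root` not modified in `days` days.
    A sketch of what 'temp file cleanup' amounts to on Linux;
    the path and threshold are arbitrary examples."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                pass  # file vanished or unreadable; skip it
    return stale

# Review the list before deleting anything:
for path in stale_files():
    print(path)
```

Many distros already run an equivalent cron job or boot-time script, so check what yours ships before rolling your own.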
|
What allows file systems like ext3 to not require defragmenting? Does it auto-defragment on the fly?
|
ext3 is one of those file systems that is poorly supported when it comes to defragging. Then again, it suffers far less fragmentation than, say, FAT32, because of the different approaches these systems take to storing data.
To put it simply, a file system can be represented as a notebook. ext3 will use only lines that are completely free, while FAT32 tends to fit things in wherever it finds a spot, and since these spots are frequently too small to hold all of a piece of data, the remaining bits are placed elsewhere on the disk. Data spread out all over the disk is precisely what fragmentation is about: the more your disk heads need to move to gather data, the poorer the performance. Knowing this, you'll understand that ext3 can fragment as badly as FAT32. When there is little space left on a partition (no more fresh lines), it is effectively forced to break data up and store it wherever it finds "lines" that are not full yet. Under such conditions it behaves like FAT32, and in fact worse, considering it is more difficult to defragment afterwards. |
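The notebook analogy can be turned into a toy simulation. This is my own illustration of the two allocation styles described, not the real allocators of either file system: `first_fit` grabs any free block, while `contiguous_fit` insists on an unbroken run and only falls back to fragmenting when none exists.

```python
def first_fit(disk, size, tag):
    """FAT32-style: fill any free block, fragmenting the file if needed."""
    placed = 0
    for i, block in enumerate(disk):
        if block is None:
            disk[i] = tag
            placed += 1
            if placed == size:
                return True
    return False  # not enough space at all

def contiguous_fit(disk, size, tag):
    """ext3-style: find a free run big enough; fall back to first-fit
    only when no such run exists (the 'nearly full disk' case)."""
    run_start, run_len = None, 0
    for i, block in enumerate(disk):
        if block is None:
            if run_start is None:
                run_start = i
            run_len += 1
            if run_len == size:
                for j in range(run_start, run_start + size):
                    disk[j] = tag
                return True
        else:
            run_start, run_len = None, 0
    return first_fit(disk, size, tag)  # forced to fragment

def fragments(disk, tag):
    """Count the contiguous runs a file was split into."""
    runs, prev = 0, None
    for block in disk:
        if block == tag and prev != tag:
            runs += 1
        prev = block
    return runs

disk_a = ["A", None, "B", None, None, None, "C", None]
disk_b = list(disk_a)
first_fit(disk_a, 3, "D")        # write a new 3-block file "D"
contiguous_fit(disk_b, 3, "D")
print(fragments(disk_a, "D"))    # FAT32-style: split into 2 pieces
print(fragments(disk_b, "D"))    # ext3-style: 1 contiguous piece
```

The same `contiguous_fit` call on a disk with no free run of 3 blocks would be forced through the `first_fit` fallback, which is the "nearly full partition" case described above.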
I see then that in Linux a defragmenter is less needed. That is also partly because far fewer programs are installed and uninstalled in Linux than in Windows. Thanks for the information.
|
ext3 does not need defrag. Unlike FAT and NTFS, it uses a journal instead of writing directly to the disk in sequence. ext3 first puts all writes into a hidden journal area on the disk. It then writes the journaled data onto the true disk blocks, but linearly, taking its time, in the background. Because it uses a journal, files go down in one linear pass instead of in little bits and pieces all over the place. The tradeoff is that the journal adds delay when writing to disk. However, reads are faster, because the file is all in one piece within neighboring blocks. That makes sense: faster reads matter more than slower writes. I don't care how long it takes my computer to finally write a file to disk, but I do care that I can open it quickly in one swoop, without the disk head having to look for the file in several different areas. So ext3 writes all files already defragged, and you don't need defrag on ext3. Defrag always sounded silly to me anyway; somehow it reminds me of the scenes in really old movies where the guy has to hand-crank his Ford Model T to start it.
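The write path described in that post can be sketched as a toy model. To be clear, this illustrates the idea as stated above, not ext3's actual internals (real ext3 journaling is primarily about surviving crashes); the class and method names are mine:

```python
class JournaledDisk:
    """Toy model of the described write path: writes land in a journal
    first, then a background flush lays them out in contiguous blocks."""
    def __init__(self, n_blocks):
        self.blocks = [None] * n_blocks
        self.journal = []          # pending (name, size) writes

    def write(self, name, size):
        self.journal.append((name, size))  # fast: no seeking yet

    def flush(self):
        """Background step: commit journaled writes contiguously,
        sweeping forward through the disk once."""
        cursor = 0
        for name, size in self.journal:
            while cursor + size <= len(self.blocks):
                run = self.blocks[cursor:cursor + size]
                if all(b is None for b in run):
                    for i in range(cursor, cursor + size):
                        self.blocks[i] = name
                    break
                cursor += 1
        self.journal.clear()

d = JournaledDisk(8)
d.write("A", 2)       # returns immediately; nothing on disk yet
d.write("B", 3)
d.flush()             # background pass lays both files out contiguously
print(d.blocks)       # ['A', 'A', 'B', 'B', 'B', None, None, None]
```

The batching is what buys the linear layout here: deferring the writes lets the flush place each file in one run instead of wherever the head happens to be.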
|
Quote:
I'm from Windows, so please bear with me... From what you described, it sounds like ext3 simply caches the to-be-written files in the hidden journal location, then writes them to disk storage when it has time. But I don't understand how that helps keep the files defragmented. I don't understand journalling, but from what I've read elsewhere it sounds like it's more of a data redundancy facility than one for defragmentation. So, consider this scenario: a 10-byte hard drive (sounds ridiculous, but it keeps things simple). A 5-byte file is written sequentially starting at byte #1, and a 3-byte file is written sequentially starting at byte #7. This leaves only bytes #6 and #10 available for writing. How Windows does it: the new file is saved in fragments into the #6 and #10 byte locations. How does ext3 handle this type of situation? Does it automatically defragment in the background, then save the new, perhaps journalled, files defragmented? |
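The 10-byte scenario can be worked through directly. This is my own sketch (`windows_style` mimics the fill-any-gap behavior described in the question); the short answer, per the earlier replies, is that on a disk this full ext3 has no better option either. It avoids fragmentation by keeping free space in large runs, but once only scattered bytes remain, any file system must split the file.

```python
# Bytes 1-5 hold a 5-byte file, bytes 7-9 a 3-byte file;
# only bytes #6 and #10 (indices 5 and 9) are free.
disk = ["F1"] * 5 + [None] + ["F2"] * 3 + [None]

def windows_style(disk, size, tag):
    """Fill any free byte, splitting the new file as needed."""
    placed = 0
    for i, b in enumerate(disk):
        if b is None and placed < size:
            disk[i] = tag
            placed += 1
    return placed == size  # False if the disk is simply too full

windows_style(disk, 2, "F3")
print(disk[5], disk[9])  # the 2-byte file F3 lands in both gaps
```

On a partition with plenty of contiguous free space, ext3 would instead have placed the whole file in one run, which is why it rarely has to resort to this in practice.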
From this page: http://www.salmar.com/pipermail/wftl...ch/000603.html
Quote:
I have noticed a lot less thrashing, I would even say almost no noticeable thrashing, in Linux, even when I use a lot of big multimedia GUI apps. |
Thank you. That clarifies some things for me.
Sometimes reading threads raises more questions for me, whether technical or philosophical. I appreciate your willingness to answer. |
A question I have been wondering about: if the way ext3 accesses data on the drive really allows for faster reads, why does it seem to take forever for apps to load in Linux?
Not bashing Linux, but Windows XP and all the programs I use in Windows load light-years faster than Linux and its apps. I am using openSUSE 10.2 and am new to Linux in general, so I am sure there are tweaks to be had to speed things up, but I was just wondering whether part of the slow application load times could be caused by disk reads that are not as fast as was claimed above. Also, this is probably not the thread for it, but I have been having programs act as if they were loading (I clicked the icon and the mouse cursor changed, showing something was going on), but after maybe 30 seconds nothing loads and the cursor changes back, as if I had never clicked the icon in the first place. Does anyone know what could cause this? |
Quote:
When I was taking my first class on Solaris system administration I asked the teacher the same question. I was used to Windows and VMS and they both required a lot of defragmenting. The teacher said that it was due to the inode structure of the file system. Frankly I didn't buy it then and I don't buy it now. On the other hand I don't see Unix or Linux systems performance degrade over time as would be evident if the disks were getting very fragmented. :confused: |
Performance actually involves more than just hard drives, of course. MSware tends to pre-load a lot of stuff into RAM, which does make for faster access, but it also tends to waste resources unless you are aware of everything that is going on.
|