Old 03-06-2002, 04:30 PM   #1
glock19
Member
 
Registered: Aug 2001
Distribution: Debian Etch
Posts: 510

Rep: Reputation: 32
defragment linux


Do Linux disks need to be defragmented?
 
Old 03-06-2002, 04:31 PM   #2
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

Rep: Reputation: 1984
No, they put everything in order in the first place. I couldn't give you a technical explanation though; I just presume the filesystem driver plans where to put files vastly more effectively than FAT32 does.
 
Old 03-06-2002, 04:40 PM   #3
Aussie
Senior Member
 
Registered: Sep 2001
Location: Brisvegas, Antipodes
Distribution: Slackware
Posts: 4,590

Rep: Reputation: 58
Fragmented drives are a Microsoft invention, and people have just become used to them. Real OSes don't fragment (TM).
 
Old 03-06-2002, 05:46 PM   #4
frieza
Senior Member
 
Registered: Feb 2002
Location: harvard, il
Distribution: Ubuntu 11.4,DD-WRT micro plus ssh,lfs-6.6,Fedora 15,Fedora 16
Posts: 3,233

Rep: Reputation: 406
Actually, I think the Macintosh fragments as well, not just Micro$oft OSes. Linux uses the ext2 filesystem, which uses inodes. I'm not sure exactly how it works, but my guess would be that it copes better with the empty spaces left on the disk when files are removed. Someone correct me if I'm wrong, OK?
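To picture what those empty spaces can do, here's a toy Python sketch of a naive first-fit block allocator. This is purely illustrative and nothing like what ext2 actually does internally; it just shows how deleting a file leaves a hole that can force a later, larger file to be split across non-contiguous blocks, which is exactly what fragmentation is.

[CODE]
# Toy illustration only: a naive first-fit allocator, NOT how ext2 works.
DISK_BLOCKS = 16
disk = [None] * DISK_BLOCKS        # None = free block, otherwise a file name

def allocate(name, nblocks):
    """Grab the first free blocks found, contiguous or not (first-fit)."""
    placed = []
    for i, owner in enumerate(disk):
        if owner is None:
            disk[i] = name
            placed.append(i)
            if len(placed) == nblocks:
                break
    return placed

def delete(name):
    """Free every block belonging to the file."""
    for i, owner in enumerate(disk):
        if owner == name:
            disk[i] = None

allocate("a", 4)            # blocks 0-3
allocate("b", 4)            # blocks 4-7
allocate("c", 4)            # blocks 8-11
delete("b")                 # leaves a 4-block hole in the middle
print(allocate("d", 6))     # [4, 5, 6, 7, 12, 13] -> "d" ends up fragmented
[/CODE]

A smarter allocator, like ext2's, tries to place files so this situation rarely comes up in the first place.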
 
Old 03-06-2002, 06:02 PM   #5
isajera
Senior Member
 
Registered: Jun 2001
Posts: 1,635

Rep: Reputation: 45
ext2 was built mainly for speed. Since hard disks are the lumbering dinosaurs of the computer world, ext2 uses the time it spends waiting on the drive to pick block and inode placements carefully, so that fragmentation doesn't occur in the first place. That's part of the answer, anyway; it goes hand in hand with the argument for complex memory management. If you want more specifics, though, you'll really need to read up on ext2's allocation schemes compared to some other filesystems'. It's not as easy or simple as it sounds.
 
Old 03-06-2002, 06:19 PM   #6
d3funct
Member
 
Registered: Jun 2001
Location: Centralia, WA
Posts: 274

Rep: Reputation: 31
Actually, I have a filesystem on an AIX 4.3 RS/6000 43P that is fragmented, and I'm planning on running 'defragfs' on it this weekend. I'd never seen a Unix-type OS fragment before, but since the 'defragfs' tool exists and this disk is fragmented, I'd say it is possible and does happen on OSes other than Micro$oft's, though it's obviously quite rare.
 
Old 03-06-2002, 06:21 PM   #7
neo77777
LQ Addict
 
Registered: Dec 2001
Location: Brooklyn, NY
Distribution: *NIX
Posts: 3,704

Rep: Reputation: 56
If you look closely at a UNIX filesystem you'll notice that fragmentation actually does occur on disk. UNIX sees a disk as a collection of blocks of a predefined size. Assuming a 4K block size, storing a 9K file takes 3 blocks: two full 4K blocks plus another 4K block for the remaining 1K, so 3K of that last block goes unused. The next file then starts at the beginning of the next block, which keeps each file's space block-aligned.

But there is a difference between the FAT system used by Windows and the ext2 system used by Linux. All the information about a file in UNIX is stored in an inode. There is a table of inodes, and each entry in that table points to the actual file (everything in Linux is a file: regular files, directories, and special files like block and character devices). It isn't quite that simple, though: an inode also contains permissions, modification times, and file state, and it's more complex for special files.
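To put rough numbers on the example above, here's a quick Python sketch of the 9K-file case: how many 4K blocks a file occupies and how much of the last block goes unused. It only illustrates the arithmetic, not the actual on-disk layout.

[CODE]
import math

BLOCK_SIZE = 4 * 1024   # 4K blocks, as in the example above

def blocks_and_slack(file_size):
    """Return (blocks used, bytes left unused in the last block)."""
    blocks = math.ceil(file_size / BLOCK_SIZE)
    slack = blocks * BLOCK_SIZE - file_size
    return blocks, slack

print(blocks_and_slack(9 * 1024))   # (3, 3072) -> 3 blocks, 3K unused
print(blocks_and_slack(1))          # (1, 4095) -> tiny files waste most of a block
[/CODE]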
 
Old 03-08-2002, 05:43 AM   #8
Thymox
Senior Member
 
Registered: Apr 2001
Location: Plymouth, England.
Distribution: Mostly Debian based systems
Posts: 4,368

Rep: Reputation: 64
Would it be a better idea, then, to have 1K blocks? Obviously the table for all inodes would be much bigger, but how much difference would it make on large hard disks (>10GB)?
 
Old 03-08-2002, 06:33 AM   #9
Mik
Senior Member
 
Registered: Dec 2001
Location: The Netherlands
Distribution: Ubuntu
Posts: 1,316

Rep: Reputation: 47
If most of your files are only a few kilobytes, then you could save space (but not speed) by setting the block size to 1K. If your files are mostly large, a bigger block size means less metadata to keep track of and relatively little space lost in each file's last block.
On most computers file sizes vary greatly, so it's best to choose a block size which works well in most situations; the default of 4K seems to work fine for most people. But if you have one partition which only stores large files, you could probably gain speed and save space by setting the block size to something higher.
I don't know all the details of the filesystem, so I can't give exact numbers for a 10GB disk. But consider that with a block size of 1K, every 1K block has to be tracked in the filesystem's metadata; that's 1,048,576 blocks per gigabyte. Say each entry costs about 10 bytes: that's roughly 10MB of bookkeeping per GB, or around 100MB for a 10GB disk, and I think a backup copy of that metadata is kept as well, so you'd have to at least double it. With 4K blocks you'd use roughly a quarter of that.
None of those are accurate numbers; it's just to give you an idea of how much difference it might make.
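For a back-of-the-envelope comparison, here's a small Python sketch using the same assumed 10-bytes-per-block figure from above (that number is an assumption, not the real ext2 on-disk layout). It estimates the bookkeeping cost and the per-file slack for 1K versus 4K blocks on a 10GB disk.

[CODE]
# Back-of-the-envelope only: the 10-byte "entry" per block is an assumption
# carried over from the post above, not the real ext2 metadata layout.
import math

DISK_SIZE = 10 * 1024**3        # a 10GB disk
ENTRY_SIZE = 10                 # assumed bytes of bookkeeping per block

def overhead(block_size, file_sizes):
    metadata = (DISK_SIZE // block_size) * ENTRY_SIZE
    # space wasted in each file's partly-used last block
    slack = sum(math.ceil(s / block_size) * block_size - s for s in file_sizes)
    return metadata, slack

files = [500, 2000, 9 * 1024, 1024**2]      # a mix of small and large files
for bs in (1024, 4096):
    meta, slack = overhead(bs, files)
    print(f"{bs // 1024}K blocks: ~{meta / 1024**2:.0f}MB bookkeeping, "
          f"{slack / 1024:.1f}K slack for these {len(files)} files")
[/CODE]

With 1K blocks the bookkeeping works out to roughly 100MB for the whole disk versus about 25MB with 4K blocks, while the per-file slack goes the other way.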
 
Old 06-05-2004, 03:13 PM   #10
pyro05x
LQ Newbie
 
Registered: Jun 2004
Location: Kansas
Distribution: Mandrake 9.2 Download Edition
Posts: 1

Rep: Reputation: 0
Question

Does XFS fragment files?
 
Old 06-05-2004, 03:19 PM   #11
borrrden
Member
 
Registered: May 2004
Location: Philadelphia
Distribution: Fedora Core 3
Posts: 98

Rep: Reputation: 15
I read about this: the reason that ext2 drives do not fragment as much is that they are CYLINDER based, instead of Windows' SECTOR based layout.

The only way to fragment one is to fill it up to near full capacity (so don't do that).

My theory is that since there are many more cylinders than sectors, they are much smaller, so instead of big blocks of information being spread all through one humongous sector, there are maybe one or two programs contained in a cylinder, which is much harder to fragment, I'd say.

But then again, I'm a newbie, what do I know? :-P
 
Old 09-14-2004, 06:22 PM   #12
turtle_lover
LQ Newbie
 
Registered: Apr 2004
Location: Tupelo, Ms
Distribution: OpenSuse 10
Posts: 11

Rep: Reputation: 0
I've heard that there are some programs to defrag on Linux, and that it was shown to give a significant performance increase.
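Before reaching for a defrag program, it's worth checking whether anything is actually fragmented. Here's a small Python sketch that shells out to filefrag (the standard tool from e2fsprogs; it may need root on some setups) and reports how many extents each file occupies. A file spread over a handful of extents is nothing to worry about.

[CODE]
# Reports how many extents each file occupies, by parsing `filefrag` output
# (from e2fsprogs).  Assumed output format: "path: N extents found".
import re
import subprocess
import sys

def extent_count(path):
    out = subprocess.run(["filefrag", path], capture_output=True,
                         text=True, check=True).stdout
    match = re.search(r"(\d+) extents? found", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "->", extent_count(path), "extent(s)")
[/CODE]

Run it as, for example, python3 checkfrag.py somebigfile.iso (checkfrag.py being whatever you name the script).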
 
  


