You don't experience performance hits from fragmentation on Linux until the disk is around 80% full.
Much as many would love to argue otherwise... Linux filesystems are just as vulnerable to fragmentation as Windows filesystems. I had a discussion about this recently (within the past few years).
In that discussion, users cite sources to back up their arguments.
Most Linux filesystems avoid file fragmentation quite well as long as a partition stays relatively empty. This is achieved by reserving space for files, using delayed allocation, etc. However, these techniques fragment free space very quickly. Fragmented free space means the filesystem runs out of contiguous space to write new files, so newly written files become fragmented. So, performance stays high only as long as the partition is relatively empty.
For these reasons, I think we really need a good optimization program for Linux, one that not only defragments files but also consolidates free space into as few chunks as possible.
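To make the quoted argument concrete, here is a toy sketch (my own illustration, not a real allocator) of why fragmented free space forces new files to fragment: model the disk's free space as a list of free runs and allocate a new file first-fit across them.

```python
def allocate(free_runs, blocks_needed):
    """Greedy first-fit allocation over a list of free runs
    (each run given as its length in blocks). Returns how many
    fragments (extents) the new file gets split into."""
    fragments = 0
    remaining = blocks_needed
    for run in free_runs:
        if remaining <= 0:
            break
        take = min(run, remaining)  # fill this gap, move to the next
        remaining -= take
        fragments += 1
    if remaining > 0:
        raise RuntimeError("not enough free space")
    return fragments

# Mostly-empty disk: 100 free blocks in one contiguous run -> 1 extent.
print(allocate([100], 40))      # → 1
# Same total free space, chopped into ten 10-block gaps -> 4 extents.
print(allocate([10] * 10, 40))  # → 4
```

Same amount of free space in both cases; only its contiguity differs, which is exactly why consolidating free space matters as much as defragmenting files.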
A tool is referenced within that post for being able to display fragmentation on a Linux filesystem. I have never used it (nor needed to), so I can't attest to its validity, but other users on that Gentoo forum claim it works.
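One tool I do know of for this (not necessarily the one referenced in that post) is filefrag from e2fsprogs, which reports how many extents a file occupies. A sketch of parsing its summary line, with a hedged wrapper that shells out to the real tool if it is installed:

```python
import re
import shutil
import subprocess

def extent_count(output: str) -> int:
    """Parse the summary line filefrag prints, e.g.
    '/var/log/syslog: 3 extents found' (or '1 extent found')."""
    m = re.search(r"(\d+) extents? found", output)
    if m is None:
        raise ValueError(f"unexpected filefrag output: {output!r}")
    return int(m.group(1))

def file_extents(path: str) -> int:
    """Run filefrag on a file; assumes e2fsprogs is installed and the
    underlying filesystem supports the FIEMAP/FIBMAP ioctls."""
    if shutil.which("filefrag") is None:
        raise RuntimeError("filefrag not installed (package e2fsprogs)")
    result = subprocess.run(["filefrag", path],
                            capture_output=True, text=True, check=True)
    return extent_count(result.stdout)

# Parsing a sample summary line:
print(extent_count("/var/log/syslog: 3 extents found"))  # → 3
```

More extents on a rotational disk generally means more seeks when reading the file sequentially, which is the cost fragmentation imposes.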
Of course, the need for defragmenting, contiguous data files, and disk optimization goes away when you use a modern SSD (solid-state drive), which has the same read time no matter which sector is being accessed. Because of this, Linux stays in spiffy shape even above 80% disk usage on solid state. There are benchmarks around; just Google "c300 benchmarks".