Originally Posted by archangel_617b
We currently have all our volumes partitioned with ext3 and I'm wondering if there's a better option. We have volumes from 500GB to 2TB and recently had a power event. The 2TB volumes took over an hour to fsck.
Is there a better way? I'm not sure there is. These volumes are all basically re-used as file shares. Lots of reads, writes, large files, small files... Normal operations are fine, we don't really need to optimize for databases or anything.
We're basically all RedHat/CentOS with every release known to man... But I will be migrating to RHEL (5).
Not that migrating will be easy, but my suggestion is definitely to look at XFS. That said, you don't really seem to have a specific "type" of file, not "mostly large" or "mostly small", so maybe ext3 would be fine. As has been noted, that full fsck shouldn't have been necessary after a power loss: the journal should have replayed and recovered the filesystem unless something really bad happened. You may just have it set up to force periodic checks; look at /etc/fstab and the tune2fs settings.
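To illustrate where those forced checks come from: the sixth field of an /etc/fstab entry is the fsck pass number, and ext3 also forces a full fsck after a mount-count or time interval set on the filesystem itself. A quick sketch (the device name and mount point here are made-up examples, not from the original post):

```shell
# A sample fstab line (hypothetical device and mount point).
# The 6th field is the fsck pass number:
#   0 = never fsck at boot, 1 = root fs, 2 = other filesystems.
line='/dev/sdb1  /export/share  ext3  defaults  0  2'

# Pull out the fsck pass field with awk:
pass=$(echo "$line" | awk '{print $6}')
echo "fsck pass number: $pass"

# Separately, ext3 forces a full check every N mounts or M days.
# Inspect those counters (requires root; device is an example):
#   tune2fs -l /dev/sdb1 | grep -i 'mount count\|check'
# And to disable the periodic forced checks entirely:
#   tune2fs -c 0 -i 0 /dev/sdb1
```

Disabling the periodic check (tune2fs -c 0 -i 0) is common on big volumes where an hour-long boot-time fsck is worse than scheduling checks manually.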
XFS is nice when working with large files: it deletes fast, has always (knock on something wood-ish) been very stable for me, and is easy to recover. The only "gotcha" I've hit was an LVM arrangement where XFS didn't work well with resizing the volume (shrinking, if I remember right; XFS can grow but not shrink). So as long as you don't anticipate shrinking your disks, XFS is really nice.
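For what it's worth, growing an XFS volume on LVM is painless and can even be done while mounted; shrinking is simply not supported, which is the gotcha above. A sketch of the grow path (volume group and mount point names are hypothetical):

```shell
# Growing an XFS filesystem that sits on an LVM logical volume.
# All names here are examples; run as root on your actual LV.

# 1. Extend the logical volume by 100 GB:
#      lvextend -L +100G /dev/vg0/share
#
# 2. Grow the XFS filesystem to fill the LV; unlike resize2fs,
#    xfs_growfs operates on the MOUNT POINT and works online:
#      xfs_growfs /mnt/share
#
# There is no xfs_shrinkfs; to make an XFS volume smaller you have
# to back up, recreate the filesystem, and restore (e.g. with
# xfsdump/xfsrestore). ext3 with resize2fs can shrink, but only
# while unmounted.
```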