Clean install coming up and I'm wondering about file systems
With 14.0 coming and a new machine about to go online, I'm looking at doing at least two, possibly three, clean installs (my laptop is still on 13.0). I currently use ext3 on all my machines because I seem to have had fewer problems with data corruption on ext3 after a power loss or hard reboot than on ext4, and those are the only two filesystems I've ever used.
Does this fit with what others have seen in the real world? Are there any file systems that anyone has had better luck with? My primary concern is data integrity rather than performance.
Well, I'm not sure, but personally I'd rather choose ext4 and tweak it with mount options instead. That's just my preference, though.
What kinds of tweaks? I've never considered tweaking the file system. Can you point me to links?
Every detail is mentioned here: kernel.org/doc/Documentation/filesystems/ext4.txt
The way I see it, regardless of distro, ext4 is actually meant to be tweaked, much more so than ext3. It's better if users examine the features a filesystem offers before deploying it; there are plenty of discussions around the web about which options work best for ext4. In my case, I settled on the options that make the filesystem access the disk less often, which mattered when I built my system.
Here are my options:
Code:
noatime,data=writeback,nobarrier,commit=100,nouser_xattr,noacl
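For reference, an options string like that goes in the fourth field of /etc/fstab; a sketch, with a made-up device and mount point:

```
# /etc/fstab entry (device and mount point are placeholders)
# <device>  <mount>  <type>  <options>                                                       <dump> <pass>
/dev/sda2   /data    ext4    noatime,data=writeback,nobarrier,commit=100,nouser_xattr,noacl  0      2
```

Note that data=writeback has to be set at the initial mount (i.e. in fstab): ext4 won't let you switch journaling modes with a plain `mount -o remount`.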
For a laptop, I wouldn't go more than "commit=10"; the rest are safe enough imho.
IIRC the common data loss issues with ext4 were "fixed" in Linux 2.6.30, to the point of ext4 having data protection similar to ext3's (see the ext4 auto_da_alloc option in the mount manpage).
Other than that, I concur with konsolebox on tweaking ext4 for your own needs. For my partitions with important data (mostly lots of "small" files) I mount with journal_checksum,nodelalloc,data=ordered,commit=5, while for storage/backup partitions that are rarely written and hold mostly "large" files I use journal_checksum,delalloc,data=writeback,commit=300 (plus non-ext4-specific options such as noatime, noexec and nodev, where applicable). On filesystems where files are rarely larger than a few MB I haven't found delayed allocation to be of any measurable benefit: filefrag reports similar levels of fragmentation, and I can't feel any difference in latency. This might be due to the disks' internal write caches of about 64 MB, and the fact that I have write caching enabled for all my disks (hdparm -W /dev/sdX). However, I don't know whether delayed allocation is beneficial for data safety or not.
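In /etc/fstab form, the two profiles described above might look like this (device names and mount points are invented for illustration):

```
# Important data, mostly small files: conservative journaling
/dev/sdb1  /home    ext4  noatime,journal_checksum,nodelalloc,data=ordered,commit=5                 0 2
# Backups/storage, large files, rare writes: favor throughput
/dev/sdb2  /backup  ext4  noatime,nodev,noexec,journal_checksum,delalloc,data=writeback,commit=300  0 2
```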
As usually happens, once a little knowledge is gained the question changes. It probably makes more sense to look at the options case by case, based on what each partition is actually doing and the data it holds. I used to mount certain partitions read-only and remount them when I needed to write, but that became so cumbersome at times that I abandoned it and moved everything to ext3.
Thanks for the replies. I'm now in a much better position to start.
I have used JFS for many years and have never had data loss. I was thinking of moving to btrfs, but its fsck utility is not finished, and I consider it too dangerous to use without one.
ext4 all the way. However, it is a mystery to me why options like writeback and nobarrier are even available. A filesystem's top priority has to be data consistency ON DISK at any point in time; anything else is just madness.
@Martinus2u: Well, we know that some systems are stable enough and have sufficient power backup. Catering to those for performance is still a wise decision when designing a filesystem.
This has nothing to do with the average desktop PC or the average server. But you can already tell from the first half of this thread how the wrong message is seeping through to the users/admins of those.
I forget where I read it (Phoronix?), but btrfs had a performance regression in 3.2 or 3.4, and ext4 is still faster than it. Eventually that will change, of course, but for now I'll be reinstalling ext4 on my laptop's SSD with options similar to what e5150 described.
One thing I learned when I put in the SSD and installed 13.37 on it: ideally, you should format the drive MANUALLY from the command line with a hand-built string of options. That way you can set everything that needs setting at format time, and it gets written only once (obviously important for SSDs).

Speaking of SSDs, you can use similar options with the swap partition. Despite what some people have said, I highly recommend a swap partition on an SSD, not only to free memory in case of leaks and such, but also so you can hibernate (which is VERY fast on an SSD, though still not as fast as sleeping, of course).

One final thing: I also use that wild "commit=300", or even more, since most of the time plenty of memory is available, and it IS a laptop; this also avoids churning the SSD during heavy rewrites like compiling.

Oh, and for ext3 vs ext4, I don't think there's really any reason to use ext3 any more at this point. Meanwhile, I've never used most of those other filesystems, so I can't comment. (I HAVE used murd...er, reiserfs before, but that was long ago...)
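A sketch of what "format manually with everything set at mkfs time" might look like. Assumptions: e2fsprogs is installed, the label "root" is arbitrary, and the commands below practice on a throwaway file image instead of a real device so nothing gets destroyed; on a real install you'd point mkfs.ext4 at your actual partition.

```shell
# Create a 64 MB scratch image so we don't touch a real device
dd if=/dev/zero of=/tmp/fs-demo.img bs=1M count=64 status=none

# Format once, setting everything up front:
#   -L root                       volume label
#   -E lazy_itable_init=0,lazy_journal_init=0
#       write all inode tables and the journal now, instead of lazily
#       on first mount -- one batch of writes at format time only
mkfs.ext4 -F -q -L root -E lazy_itable_init=0,lazy_journal_init=0 /tmp/fs-demo.img

# Verify the label was set
dumpe2fs -h /tmp/fs-demo.img 2>/dev/null | grep -i "volume name"
```

The same idea applies to the swap partition: `mkswap -L swap /dev/sdXn` sets its label in the same one-shot way.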
I have used XFS for years and never had a problem with data integrity; it has coped very well with unexpected power interruptions. This article, together with its comments section, is well worth reading.