ZFS: Myths and Misunderstandings (by Jesse Smith)
A very interesting and informative article on ZFS, by Jesse Smith.
The entire article can be found at http://distrowatch.com/weekly.php?issue=20150420#myth. One item of note, found further up the same page: Quote:
That still doesn't supersede the GPL, but it does create a vast grey area in which a source-compiled kernel could be made distributable with ZFS and SPL module support included as external source packages.
To be honest, this is a good development, because if the GPL could ever get a special exemption clause just for the ZFS and SPL modules, which are CDDL-licensed, it might serve both the GPL and end users well. The Illumos developers who sponsored the OpenZFS initiative were adamant that ZFS would be developed as fully open source software, despite the fact that it was forcibly CDDL-licensed thanks in part to Oracle. Maybe this can apply to Slackware as well somehow? Personally, if this could ever allow not just one but any distribution to have binary built-in SPL and ZFS support, I'd go straight over to ZFS, no questions asked, if it were included on the install disk menus. |
Thanks for this. I had read a lot on the /r/DataHoarder subreddit about the recommended use of ECC memory and having 1 GB of RAM per 1 TB of storage, and it was turning me off of considering ZFS in a future build for a storage server. It's good to know those are just myths.
It's also good to know the reason why ZFS isn't included in the kernel. If I had done the research, I would likely have found that the licenses were incompatible, but I just assumed the port wasn't stable enough for inclusion.
Quote:
By the way: overall, btrfs in my experience runs smoother on Slackware. A scrub with ZFS of non-system disks had a noticeable impact on how responsive the system was; btrfs just churns along, leaving the system responsive, and in fact finishes far faster (i.e. higher data throughput); not sure why. |
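For reference, the two scrubs being compared are started and monitored with different tools; a minimal sketch, assuming a ZFS pool named `tank` and a btrfs filesystem mounted at `/mnt/data` (both names are placeholders):

```shell
# ZFS: scrub operates on a whole pool
zpool scrub tank
zpool status tank           # shows scrub progress and throughput

# btrfs: scrub operates on a mounted filesystem
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data
```

Both scrubs run online in the background; the responsiveness difference described above is about how each one throttles itself against foreground I/O.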
Quote:
What made me switch was the problem of not being able to shrink pools, which makes it hard to upgrade a RAID1 array on a typical home computer; that, in my opinion, is the key drawback of ZFS, and it isn't obvious until you hit it. |
True, but then again, I tend to use zvols under my own defined quotas and limit my usage, similar to how you stack partitions. This way I never have to worry about shrinking, and I can allocate resources more effectively.
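A sketch of that approach, assuming a pool named `tank` (all dataset names are hypothetical): fixed-size zvols and dataset quotas make space allocation explicit up front, much like partitioning, so the question of shrinking the pool never comes up:

```shell
# A fixed-size 20 GB zvol (exposed as a block device)
zfs create -V 20G tank/vm-disk0

# A dataset capped by a quota, with a guaranteed reservation
zfs create tank/projects
zfs set quota=50G tank/projects
zfs set reservation=10G tank/projects
```

Quotas can be raised or lowered at any time, so space can be reshuffled between datasets without touching the pool layout itself.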
Even with FreeBSD you still have some offline maintenance features that are not accessible while in the online state, which is why they embed a recovery partition in the OS. This way you can boot to recovery mode, access the shell in a RAM drive, and then scrub the zvol while it's in the offline state. Besides, with a scrub active you shouldn't be doing other tasks anyway. My longest scrub times are normally under 30 minutes even on my largest disk arrays, so I'm used to using recovery mode.

One of the reasons you can't shrink a volume with ZFS is how data is allocated on the disk itself, along with the metadata and metadata addressing that serve as the shadow copy for write-backs and data recovery. Btrfs does its journaling and metadata addressing using B-trees, which operate entirely differently from how ZFS works. The principles are the same, but ZFS uses a completely different tree structure that doesn't allow shadow-copy write-back data to be moved around, so that if there is a problem during a data scan, it can write back from the shadow copies to the proper disk addressing space.

Overall, the trade-offs between the two are minimal, but when it comes to knowing what does what, ZFS still beats btrfs hands down. It doesn't have all the nice features, but then again, does it really need them? In the end, I think it would be good for any distribution to ship ready-to-use modules on the installation disk, plus the tools to use ZFS, provided we, the end users, have control over building the Linux kernel and choosing whether to have ZFS built in or as a module. |
Quote:
The only thing to keep in mind is that, as with every choice, it has pros and cons, so it is important to make both of these clear. It appears to me that you run ZFS on relatively small arrays, since you can finish a scrub in less than 30 minutes. I'm running it on large arrays: currently six disks totaling 22 TB of space, set up as RAID1 on a consumer desktop, now using btrfs. For that type of usage, the inability to shrink a pool, as well as the lack of an option to rebalance after putting in new, bigger disks, became a problem for me: swapping out disks is no fun in ZFS, while it is easy in btrfs. Keep in mind that once an array gets nearly full, write performance drops through the floor. I can fix that in btrfs, but not in ZFS without resorting to tricks that force data to be rewritten. So for my usage pattern, btrfs is at the moment the only realistic option, until ZFS implements the long-requested block pointer rewrite. |
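The btrfs workflow being contrasted here looks roughly like this (device and mount-point names are placeholders): disks can be added or removed from a live filesystem, and a balance rewrites existing block groups across the current set of devices:

```shell
# Grow: add a disk, then spread existing data across all devices
btrfs device add /dev/sdd /mnt/data
btrfs balance start /mnt/data

# Shrink: remove a disk; btrfs migrates its data away first
btrfs device remove /dev/sdb /mnt/data

# Reclaim mostly-empty block groups on a nearly full array
# (only block groups less than 50% used are rewritten)
btrfs balance start -dusage=50 /mnt/data
```

The `-dusage` filter is the usual trick for the nearly-full-array slowdown mentioned above, since it compacts fragmented block groups without rewriting everything.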
It would be better if you chucked the article and played with ZFS for several weeks, preferably alongside btrfs.
The article simply tried too hard to dispel myths, and so it became suspect; the attempts at proof were over the top. The easy dismissal of the licensing issues becomes not so easy when trying to run a kernel with a lot of debug features, or when trying to build a classic kernel that can boot, with an in-kernel file system driver, to a / partition formatted with that file system, all without the assistance of an initrd.

ZFS can run on mortal hardware under non-enterprise workloads. This particular 64-bit FreeBSD PC uses ZFS-on-geli mirrors with 3 GB (once 2 GB) of RAM. There were no out-of-memory issues with 2 GB of RAM; I haven't tried less on 64-bit PCs. The PC does not use ECC RAM. All has been fine. That's good anecdotal proof. But to mislead people into thinking that ZFS isn't absolutely miserable when starved for memory, no, that isn't good. ZFS tends to bring Linux to a crawl when memory is too tight, and to make FreeBSD trap. By comparison, btrfs will boot on a minimal 32-bit system using less than 32 MB of memory and will cruise under similar workloads.

It's not a show-stopper: if ZFS kills your PC due to memory starvation, add more memory! For a small desktop workload, that extra memory shouldn't need to bring the total to 8 GB. Some of the higher memory quotes involve deduplication as well: if you use deduplication, maybe you should heed those quotes. I'm not that bright: I simply run ZFS without deduplication until FreeBSD runs out of memory, and take that as a cue to buy more memory. So far, the upgrade to 3 GB has been optional.

There are misconceptions about ZFS, sure, but the article might not have been the best way to address them. |
There's also a boot-time memory tuning setting (mostly for FreeBSD) that caps ZFS's cache and makes ZFS usable on machines with less than 4 GB of RAM.
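On FreeBSD of that era this was done with a loader tunable rather than a command; a sketch for /boot/loader.conf, where the value is only an illustrative guess for a low-memory machine, not a recommendation:

```shell
# /boot/loader.conf -- cap the ZFS ARC (adaptive replacement cache)
# so it leaves RAM for the rest of the system; 512M is an
# example value for a ~2 GB machine, tune for your workload
vfs.zfs.arc_max="512M"
```

Without a cap, the ARC will happily grow to consume most of free memory, which is where the low-RAM horror stories tend to come from.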
4+ GB is rather commonplace these days for many systems, though, and if you run a 64-bit OS, you should be using 4+ GB of RAM anyway. I've never had ZFS lock up or slow down on Linux or BSD on any system, including my old 2 GB RAM laptop running OpenIndiana. There are many misconceptions about ZFS, but yes, the license issue is still the biggest part of the problem. However, if the SFLC says a distribution can legally use ZFS internally without repercussions, then the license issue is moot. If the SFLC found a legal loophole between the GPLv2 and the CDDL, then by all means, they found one nobody knew about. |
Quote:
http://distrowatch.com/weekly.php?issue=20160222#news |
I had kernel crashes with 4 GB of RAM on a RAIDZ2 with 4 TB usable. It crashed every time the load went up. I upgraded to 16 GB of ECC RAM and never had a crash after that.
|