Slackware — This Forum is for the discussion of Slackware Linux.
RAID 5/6 is playing with fire; there are currently problems with parity, disk replacement, etc.
Either way, this is still a filesystem in development, so backups are a must.
(If you don't value your data enough to back it up, then why are you storing garbage?)
btrfs-convert is part of the btrfs-progs package, so you most probably have it installed already.
Btrfs does some things very differently from ext4, so reading the docs is a must before you start. Should you encounter filesystem errors later, looking through the mailing list or popping onto the IRC channel before attempting to fix them yourself is strongly advised.
Last edited by OldHolborn; 08-03-2016 at 03:16 PM.
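An in-place conversion along the lines described above might look like the following sketch. The device name is a placeholder for your own partition, and (as the post says) a full backup first is non-negotiable:

```shell
# Hypothetical device /dev/sdb1 currently holding an ext4 filesystem.
# Take a full backup before any of this.
e2fsck -f /dev/sdb1          # the ext4 fs must be clean before converting
btrfs-convert /dev/sdb1      # convert in place; the old fs is preserved in a subvolume
mount /dev/sdb1 /mnt         # inspect the result
# If you are unhappy with it, unmount and roll back to the original ext4:
umount /mnt
btrfs-convert -r /dev/sdb1   # rollback works only until the saved image subvolume is deleted
```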
I'll point out that LVM allows you to create a snapshot volume that freezes the view of a logical volume in time. It is designed for creating backups; it is not designed for being a backup. If you want to use btrfs in order to create sane backups and nothing else, then consider using LVM instead. Many of the other use cases mentioned in the thread would not be supported by LVM (RAID being an exception, but I have not personally tested that). Migrating data from one hard drive to another while the system is operating is a breeze with LVM; I don't know if btrfs can help with that or not.
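The LVM snapshot-for-backup workflow described above looks roughly like this. Volume group and volume names (`vg0`, `home`) and paths are illustrative assumptions:

```shell
# Freeze a point-in-time view of a logical volume, back up that view, drop it.
lvcreate --size 2G --snapshot --name home_snap /dev/vg0/home
mount -o ro /dev/vg0/home_snap /mnt/snap
tar -czf /backup/home-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/home_snap   # the snapshot is scratch space, not the backup itself
```

The `--size` given to the snapshot only needs to hold the blocks that change on the origin volume while the backup runs, which is why a snapshot is for *creating* backups rather than being one.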
Yup. Too many bugs still. And using it for /home? Not a chance. I did for a while (RAID 1), but I got off of it after a month due to random hangs.
btrfs is a very complex filesystem that (IMO) is a bit TOO complex. With all the checksums and error detection, it REALLY REALLY should detect single-disk errors (and rebuild them if the disk isn't flagged bad). It doesn't. It doesn't do NFS very well either.
Man I really just don't know if BTRFS is for me. I don't care much about what my file system is. I make music and that's the only thing I need backed up, which I do several times. However my file system as a whole doesn't need to be overly complex. If the main difference is convenient backups then I might as well not even bother.
Also....too complex? Looks like someone else is missing the Unix philosophy as well.
Last edited by bulletfreak; 08-03-2016 at 07:06 PM.
OK, something that btrfs can do that ext4 cannot:
detect and warn you of bitrot, i.e. when your southbridge, SATA cable, or hard drive decides to flip a bit in the data of one of your files.
btrfs in single mode will tell you so; ext4 will let you load it unwarned.
Add RAID 1 and you get recovery from that bitflip.
One really neat feature I have on my list to play with is btrfs-send; this sends just the changes between two snapshots.
Imagine having a main big array in one place and its backup elsewhere: you send the difference between two snapshots to a small drive, which you then carry to your offline location, where you can bring your backup to the same state as your main array.
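The sneakernet workflow above can be sketched as follows. The mount points (`/mnt/big`, `/mnt/usb`, `/mnt/backup`) and snapshot names are assumptions; the only hard requirement is that send operates on read-only snapshots:

```shell
# Take a new read-only snapshot of the main array.
btrfs subvolume snapshot -r /mnt/big /mnt/big/.snap2
# Write only the delta between the previous snapshot and the new one to the carry drive:
btrfs send -p /mnt/big/.snap1 /mnt/big/.snap2 > /mnt/usb/delta.btrfs
# At the offline location (which already holds .snap1), replay the delta:
btrfs receive -f /mnt/usb/delta.btrfs /mnt/backup
```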
Like I said, my photos are on btrfs RAID5 - consistency is as necessary as backup for me.
I don't want mdadm+LVM+filesystem - all points of potential failure, feature mismatch. zfs showed the way ages ago. When btrfs popped up (licensed appropriately) I jumped. RAID5 is a more recent addition.
Snaps are a god-send - I snap before I start gimp, and if it all winds up looking like crap, I toss what I'm working on and go back to the snap. Instant restore. And yes, differential snaps as well.
Isn't for everybody, but no way anyone can convince me it isn't what I need.
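The snap-before-editing workflow described above might look like this sketch, assuming the work lives in its own subvolume (paths hypothetical):

```shell
# Cheap, instant safety net before an editing session:
btrfs subvolume snapshot /home/me/work /home/me/work_before
gimp /home/me/work/photo.xcf
# If it all winds up looking like crap, toss the working copy and keep the snap:
btrfs subvolume delete /home/me/work
mv /home/me/work_before /home/me/work
```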
Pretty much the title. Also, what packages do I need exactly? I can't find a btrfs-convert package or any of the other ones, just btrfs-progs.
Depends on how much you value the data on that filesystem. I wouldn't dream of moving my data to btrfs, but I'd happily run the OS on it. At the end of the day it boils down to trust. I have some important files on my computer which are 24 years old. They've been transferred from FAT to FAT16 to FAT32 to NTFS to ReiserFS to ext3 to XFS to JFS to FFS filesystems along the way, without a hiccup. Most of these filesystems gained their users' trust quite quickly. I remember reading once that ZFS was stable and trustworthy after just two years. I also remember reading that ZFS was the work of just two men.
Can you say the same about Btrfs, after all these years? I certainly can't.
Some people here swear by it, but there are recent cases where users have abandoned it because it just wasn't reliable.
To my mind data trumps everything. You can lose your operating system, but if your filesystem corrupts your data (and from there, your backups) chances are you will be shedding tears. I don't intend to take that chance with Btrfs.
The BSDs have trustworthy filesystems of their own (ext2 was based on a BSD FS). Microsoft has a trustworthy filesystem of its own. With the pool of talent supposedly available to Linux, why are we still waiting, decades later, for Linux to catch up and produce its own long-term, resilient and trustworthy filesystem that is not considered a stopgap, as ext is? Do we really have to wait another 9 years for Btrfs to become that FS?
To my mind the reliability and trustworthiness of a filesystem are critical beyond all other metrics. It speaks volumes about the state of Linux that it still hasn't got that filesystem.
(I am currently using a mix of XFS, JFS and NetBSD's FFS for my data, combined with dar catalogs, rar recovery records and rsnapshot for backups.)
Quote:
Man I really just don't know if BTRFS is for me. I don't care much about what my file system is. I make music and that's the only thing I need backed up, which I do several times. However my file system as a whole doesn't need to be overly complex. If the main difference is convenient backups then I might as well not even bother.
Also....too complex? Looks like someone else is missing the Unix philosophy as well.
I like the cutting-edge stuff... it's "much more... muchier..." My philosophy is like the weather.
Quote:
Like I said, my photos are on btrfs RAID5 - consistency is as necessary as backup for me.
I don't want mdadm+LVM+filesystem - all points of potential failure, feature mismatch. zfs showed the way ages ago. When btrfs popped up (licensed appropriately) I jumped. RAID5 is a more recent addition.
Snaps are a god-send - I snap before I start gimp, and if it all winds up looking like crap, I toss what I'm working on and go back to the snap. Instant restore. And yes, differential snaps as well.
Isn't for everybody, but no way anyone can convince me it isn't what I need.
That's funny, because all of my systems (well, not my laptops since they have only one drive) are on mdadm+LVM+<various_filesystem_types>. My laptops are on LVM+<various_filesystem_types>.
I don't have your use-case because all of my gimp stuff looks like crap anyways. No need to snapshot. The stuff that I need to version control in some way is under git.
All that being said, since you're happy with your setup and it fits the way you work, great! Raises a beer at the screen
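For comparison, the mdadm+LVM stack mentioned above is layered roughly like this. Device names, the volume group name `vg0`, and sizes are placeholders:

```shell
# Two-disk mirror, with LVM on top, filesystems of your choice above that.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0                  # the md device becomes an LVM physical volume
vgcreate vg0 /dev/md0
lvcreate --size 50G --name home vg0
mkfs.ext4 /dev/vg0/home            # or xfs, jfs, ... per logical volume
```

Each layer is a separate, well-tested tool, which is exactly the modular approach btrfs folds into a single filesystem.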
I gave up on btrfs after fruitlessly trying to get some useful information out of the man page.
I'd like to call it poorly documented, at least in 14.1. I use zfs (sbo) or ext4 on raid for storage.
I use ext2 for my boot partition, jfs for my /(root), and zfs on my /home. I used to use btrfs a lot but it often has issues with various kernels from time to time and isn't as stable as I'd like it to be.
ZFS is stable to me, far more stable than btrfs ever will be. ZFS works, plain and simple.
BTRFS is quite stable for simple scenarios but is still "unfinished" and can lead to trouble.
I have used BTRFS for a long time, I love its features, and it has been rock solid in my case. However, I get angry when I read the many guides on the internet praising it so highly, labeling it "stable", and irresponsibly urging people to try it.
It is in a good state now, but not perfect. It has grown past the stage where it works correctly and then, out of the blue, suddenly gets corrupted. It has also grown past the stage where it gets corrupted after a simple power failure. It still has problems, though, and is not for everyone. While I cannot dream of leaving btrfs and returning to LVM plus another fs, I wouldn't call it "stable" yet, nor would I blindly suggest it to everybody.
When you read in a guide that "btrfs is stable", what they probably mean is "btrfs is stable under the following circumstances":
a) You use it on a laptop, or on a desktop with a UPS, so that you can be sure there will be no power failures (although it is not so picky about power failures now).
b) You use it on one disk and not RAIDx (even in the simple scenario of RAID 1, which is normally super stable, there were some bugs recently).
c) You never fill it above 60-70%, because the allocation algorithms are not optimized yet and can give out-of-space errors even with lots of free space (I read there is a new algorithm in the 4.8 kernel).
d) You are always using the latest kernel and not distribution kernels 36 versions behind (there are people reporting bugs on the mailing list who still run 3.x kernels).
e) You don't have many snapshots (a rough guide is a maximum of 300-500 subvolumes and 1000-1500 snapshots, because too many of them caused problems).
f) You don't use quotas, because the implementation was not so good and it led to trouble with balance and the like.
g) You don't use btrfs send, because it resulted in corruption and loss of the fs in some cases.
h) You don't use convert from extX, because it is broken and leads to all kinds of problems.
i) Something else I forget.
There are people, of course, who use many of the above features and have never had a problem. I myself use it without a UPS, have had power failures, have a couple of hundred snapshots, and have never had a corruption. Statistically, though, judging from the experiences of people on the mailing list, you need to observe the above to be "sure" you won't see corruption.
A relatively experienced user who is willing to compile the latest kernel and to subscribe to the mailing list and follow it will not have problems with BTRFS. It is not, however, a fire-and-forget FS like extX/XFS are.
The biggest problem with BTRFS is that it is still a moving target, and thus you cannot be sure that version N+1 of the kernel is better than version N. There was a bug recently (I think in the 4.6 kernel, or was it 4.7?) that did not exist in older versions.
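Several of the conditions above (free-space headroom, checksum health, kernel age) can be monitored from the command line. A sketch, with `/mnt/data` as a hypothetical mount point:

```shell
btrfs filesystem usage /mnt/data          # real allocation vs. free space (condition c)
btrfs balance start -dusage=50 /mnt/data  # repack half-empty data chunks to stave off ENOSPC
btrfs scrub start /mnt/data               # verify all checksums in the background
btrfs scrub status /mnt/data              # check for csum errors found so far
uname -r                                  # and stay on a recent kernel (condition d)
```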
Quote:
OK, something that btrfs can do that ext4 cannot -
detect and warn you of bitrot, your southbridge/sata-cable/harddrive decides to flip a bit in the data of one of your files.
btrfs in single mode will tell you so, ext4 will let you load it unwarned.
add raid1 and you get the recovery from that bitflip.
I'm not sure why most people ignore the bitrot problem, which the mdadm+LVM solution has no protection against without tricks. This, for me, is one of the main reasons to use zfs or btrfs. For modern large hard drives, with correspondingly small physical bit sizes, this is a real problem. It is silent, and without running checksums over the whole filesystem one runs the risk of files corrupting, which then propagates when backups are done (assuming the ability to keep N copies of the data for a long time is finite for most users).
For that reason I've been using ZFS, and now BTRFS, for a long time, and running scrubs does pick up csum errors on several TB of data on disk. CERN has done a pretty detailed study on this, btw.
I'm now using BTRFS, as ZFS can't really cope with differently sized disks in a conventional desktop; resizing in ZFS isn't as flexible and is more targeted at business storage. I've been running BTRFS RAID 1 for years and recently switched to RAID 5 at some point in the 4.0 kernel cycle, as I was running out of SATA ports even with extenders. I agree with others that RAID 5 still needs work, mainly because running a scrub is way too slow (about 10x slower than RAID 1 on the same system).
To summarize: for me btrfs has worked well to date, but I don't use the snapshot-like advanced features. All I'm really interested in is the improved protection against bitrot.
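On filesystems without built-in checksums, the detection half of this (though not the self-healing) can be approximated in userspace: record a checksum per file, then re-verify before each backup run so silent corruption doesn't propagate into the backup chain. A minimal sketch (the function names are my own, not any standard tool):

```python
import hashlib
import os

def file_sha256(path):
    """Stream a file through SHA-256 so large files need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """Record a checksum for every regular file under root."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = file_sha256(path)
    return manifest

def verify(root, manifest):
    """Return the relative paths whose current checksum no longer matches."""
    return sorted(rel for rel, digest in manifest.items()
                  if file_sha256(os.path.join(root, rel)) != digest)
```

Run `build_manifest` after writing data, save the result (e.g. as JSON), and run `verify` against it before the next backup; any path it returns was silently modified or corrupted since the manifest was taken.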
Quote:
OK, something that btrfs can do that ext4 cannot -
detect and warn you of bitrot, your southbridge/sata-cable/harddrive decides to flip a bit in the data of one of your files.
At least it is supposed to. The problem that remains is that, besides telling you, in a RAID 5 it should be able to rebuild the data when that happens... it doesn't.
Quote:
btrfs in single mode will tell you so, ext4 will let you load it unwarned.
Which is the same as btrfs.
Quote:
add raid1 and you get the recovery from that bitflip.
It should. My tests show that it doesn't.
Quote:
One really neat feature I have on my list to play with, btrfs-send, this sends just the changes between two snapshots
Imagine having a main big array in one place and its backup elsewhere, you send the difference between two snapshots to a small drive which you then carry to your offline location where you can then bring your backup to the same state as your main array
Like I said, it has lots of promise... but doesn't yet deliver.
Quote:
At least it is supposed to. The problem that remains is that besides telling you, in a raid 5 it should be able to rebuild it when that happens... it doesn't.
Not sure I have the same experience here; as far as I can tell, running RAID 5 fixes csum errors correctly IF it can read the corresponding data on the devices. My impression is that it struggles when hitting bad blocks on a drive, i.e. the occasional errors I get that can't be fixed are always associated, in the logs, with problems reading a sector/block. RAID 1 worked fine for me.
How recent are the kernel and btrfs-tools you tested this with?