BTRFS raid1 corrupt leaf error?
All,
I have a RockPro64 NAS with two identical 8 TB drives in a BTRFS raid1 configuration on top of LUKS. I'm running Open Media Vault, aarch64. Every so often, generally once after each rsync backup over the network, I get kernel messages about a corrupt leaf in /dev/mapper/sda-crypt like this: Code:
The filesystem always mounts without errors, so it seems as though any corruption has been repaired. A scrub never finds any errors, but I understand that this kind of error would be invisible to a scrub. I tried different SATA cables and a different SATA adapter, but the errors still show up. Both drives report as healthy under smartctl -a (and they have very few power-on hours!). I also get messages such as Code:
[Sat Apr 24 13:46:48 2021] BTRFS info (device dm-1): bdev /dev/mapper/sda-crypt errs: wr 274, rd 0, flush 2, corrupt 0, gen 29
I'm going to try another drive power supply next, but only because that's a quick check. I also have another identical new drive and could swap it out and rebuild the RAID, but it may not be the drive. Has anyone seen an error like this before? Any advice?
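For reference, the counters in that errs line are cumulative per-device totals. A quick sketch to pull them apart (the log text is copied from the message above; the field meanings are as documented for btrfs device stats):

```shell
# Parse the counters out of the BTRFS errs line quoted above.
# wr/rd = failed writes/reads, flush = failed cache flushes,
# corrupt = checksum mismatches, gen = generation (transid) mismatches.
line='BTRFS info (device dm-1): bdev /dev/mapper/sda-crypt errs: wr 274, rd 0, flush 2, corrupt 0, gen 29'
for key in wr rd flush corrupt gen; do
    val=$(printf '%s\n' "$line" | sed -n "s/.* $key \([0-9]*\).*/\1/p")
    echo "$key=$val"
done
```

Worth noting: wr and flush are nonzero but corrupt is 0, which points at the write path (cabling, power, the USB/SATA bridge) rather than at bad data already on disk.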
I have not seen that, but BTRFS has several auto-correction features that will normally repair problems before the operator ever notices, unless they go LOOKING for them.
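One way to go looking: the kernel logs a message when raid1 self-heal repairs a block, and the per-device error counters persist across reboots. A rough sketch, assuming the kernel prints its usual "read error corrected" message on repair (exact wording can vary by kernel version; the sample log lines here are illustrative):

```shell
# Sample kernel log lines (illustrative); on the NAS itself you would
# grep the real log, e.g.:  dmesg | grep -i btrfs  or  journalctl -k
cat > /tmp/kern.sample <<'EOF'
BTRFS info (device dm-1): read error corrected: ino 257 off 0 (dev /dev/mapper/sda-crypt sector 204800)
BTRFS info (device dm-1): bdev /dev/mapper/sda-crypt errs: wr 274, rd 0, flush 2, corrupt 0, gen 29
EOF
# Count silent repairs that would otherwise go unnoticed.
grep -c 'read error corrected' /tmp/kern.sample
```

On the live system, `btrfs device stats <mountpoint>` prints the same wr/rd/flush/corrupt/gen counters, and `btrfs device stats -z <mountpoint>` resets them, so any new errors after a change (cable, power supply) stand out immediately.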