"Incorrect metadata area header checksum"
I am willing to wipe out all data on the disk. In fact, I have deleted all partitions and started fresh several times, but I still get the error "Incorrect metadata area header checksum" when I run vgscan as soon as any new partition is put on the disk -- even non-LVM partitions. This error magically persists across repartitioning and all other disk changes. How do I make this disappear? I suspect using dd to zero out the whole disk might work, but I want a shortcut... something faster than waiting a day or two for dd to zero a 1.5 TB disk. Where is this LVM metadata persisted on the disk that enables it to reappear even after deleting all partitions? |
Did you use dmraid at some point? You can use dmraid -E to erase metadata.
|
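If firmware RAID metadata turns out to be the cause, a minimal sketch of checking for it and erasing it with dmraid (assuming a disposable device /dev/sdX; on most dmraid versions the erase flag is combined with -r):
Code:
# list any dmraid-recognised metadata on the device (read-only)
sudo dmraid -r /dev/sdX
# erase that metadata after confirming the prompt (destructive)
sudo dmraid -r -E /dev/sdX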
Quote:
However, I also used LVM to remove everything (or so I thought). I just do this: Code:
dd if=/dev/zero of=/dev/sdX bs=446 count=1 |
Well, dd should work, but it is a blunderbuss to kill a mosquito, as they say.
dmraid metadata can also be written by on-board firmware RAID controllers. You might have written some without knowing it if you were experimenting, or the disks might have come to you used. You could check for dmraid-style metadata (stored at the end of the disk and typically not erased by LVM or repartitioning) with dmraid -r |
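Since that style of metadata lives near the end of the device, zeroing just the tail is much faster than wiping the whole disk. A minimal sketch, assuming /dev/sdX is disposable and that the metadata really does sit within the last 1 MiB:
Code:
# disk size in 512-byte sectors
SECTORS=$(sudo blockdev --getsz /dev/sdX)
# zero the final 1 MiB (2048 sectors) of the disk (destructive)
sudo dd if=/dev/zero of=/dev/sdX bs=512 seek=$((SECTORS - 2048)) count=2048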
Quote:
I'm pretty sure my problem is on sde or sde1, which was indeed part of a RAID array on the Areca 1220 controller. Here's the section from vgscan:
Code:
vgscan
/dev/sde: size is 1953525168 sectors
But this isn't the result I expected here:
Code:
sudo dmraid -r /dev/sde1/
Code:
sudo dmraid -r /dev/sde/
Code:
sudo dmraid -E /dev/sde1/
UPDATE: maybe this helps:
Code:
sudo dmraid -r |
Detailed update:
Background on the problematic drive (and a spare):
Drive: sde
Code:
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
Code:
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
Duplicate the boot partition (and the 62 sectors preceding it) to the other drive (sdf):
Code:
$ sudo dd bs=512 if=/dev/sde of=/dev/sdf count=690794
Code:
vgscan
/dev/sde: size is 1953525168 sectors
Change the UUID on the cloned filesystem so it does not collide with the original:
Code:
$ sudo tune2fs -U c1b9d5a2-f162-11cf-9ece-0020afc76f16 /dev/sdf1
Wanted to test this:
Code:
$ sudo dd bs=62 if=/dev/zero of=/dev/sdf count=1
Test pvremove on the backup drive:
Code:
$ sudo pvremove /dev/sdf1
Do it for real:
Code:
$ sudo pvremove /dev/sde1
Code:
vgscan
/dev/sde: size is 1953525168 sectors
FYI => GRUB 0.97 is installed in the MBR of /dev/sde and looks on the same drive in partition #1 for /grub/stage2 and /grub/menu.lst. But there are a few new problems:
Quote:
Code:
$ sudo dd if=/dev/zero of=/dev/sdf bs=446 count=1
Code:
$ sudo dd if=/dev/zero of=/dev/sdf2 bs=1kB count=1
Now, how to fix sde2?
Code:
$ sudo dd if=/dev/zero of=/dev/sde2 bs=1kB count=1
Code:
$ sudo bash boot_info_script27.sh |
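The rehearse-on-the-clone-first pattern above can be reduced to a short check-remove-verify sequence. A minimal sketch, assuming the stale physical volume label sits on a hypothetical /dev/sdX1:
Code:
# show any physical volumes LVM still recognises
sudo pvs
# remove the PV label; add -ff only if a plain pvremove refuses
sudo pvremove /dev/sdX1
# confirm nothing is left behind
sudo pvscan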
Very interesting. I'm glad you got it fixed; I'll try to remember this one the next time someone asks about a "metadata" error.
|
Quote:
TIA |
Let me update by saying that yesterday I started as I had before and put new partitions on the disks, created new LVM physical volumes, volume groups and logical volumes, made encrypted volumes in those LVs, and created and formatted file systems on them.
After doing all that (and at stages in between), I ran all the diagnostics I ran before. Now I am not getting any of the errors I saw previously. So I think that pretty much concludes this issue. Previously, any time I deleted the partitions and then recreated them, the errors came back. I'm not seeing that now. I'm happy :) |
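As a rough outline of that rebuild sequence (the device, volume group and logical volume names here are illustrative, not the poster's actual layout):
Code:
# put LVM on the freshly created partition
sudo pvcreate /dev/sdX1
sudo vgcreate vg_data /dev/sdX1
sudo lvcreate -n lv_data -L 500G vg_data
# layer LUKS encryption inside the logical volume, then a filesystem on top
sudo cryptsetup luksFormat /dev/vg_data/lv_data
sudo cryptsetup luksOpen /dev/vg_data/lv_data data_crypt
sudo mkfs.ext4 /dev/mapper/data_crypt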
Any idea what was different between before and now?
|
Quote:
Code:
sudo pvremove /dev/sdf1
Code:
dd if=/dev/zero of=/dev/sdX bs=1k count=1
I do not know exactly where LVM stores this metadata, but it apparently isn't in the first 1k of the disk. Most of what I did in this thread ended up being useful for me to learn and to verify my results, but in the future, if I get that error under similar circumstances, I will simply use
Code:
sudo pvremove /dev/sdX
The info I found in sources such as the LVM How-To did not seem to apply to my situation (although in hindsight I can see the relationship). But I wasn't trying to recover physical volume metadata, I was trying to get rid of it. :)
EDIT: And the reason it was so important to me to remove this error message is that I am building a file server. I reasoned that if I couldn't start with an error-free storage state, it certainly did not make sense to begin putting all my important data on this server. So I wanted to understand and remove any and all errors I was seeing before putting this server to use. |
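For anyone who hits the same checksum error later: one way to see exactly which signatures (LVM, RAID, filesystem) remain on a device, and at what byte offsets, is wipefs from util-linux. A minimal sketch, assuming a disposable device /dev/sdX:
Code:
# list signatures and their offsets without changing anything
sudo wipefs /dev/sdX
sudo wipefs /dev/sdX1
# destructive: erase every detected signature on the partition
sudo wipefs --all /dev/sdX1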