Linux - Hardware: This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
This error magically persists across repartitioning and all other disk changes.
How do I make this disappear? "Incorrect metadata area header checksum"
I am willing to wipe out all data on the disk. In fact, I have deleted all partitions and started fresh several times, but I still get the error "Incorrect metadata area header checksum" when I run vgscan as soon as any new partition is put on the disk -- even non-LVM partitions.
I suspect using dd to zero out the whole disk might work, but I want a shortcut -- something faster than waiting a day or two for dd to zero a 1.5 TB disk.
Where is this LVM metadata persisted on the disk that enables it to reappear even after deleting all partitions?
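A quicker alternative to zeroing the whole disk, sketched here under the assumption that only the signature areas need clearing (/dev/sdX is a placeholder -- this destroys data, so double-check the device name first):

```shell
# Erase all known signature blocks (LVM2, RAID, filesystem magic) in place.
sudo wipefs -a /dev/sdX

# Belt and braces: zero the first 1 MiB (MBR, partition table, LVM label
# and metadata area all live near the front of the device)...
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1

# ...and the last 1 MiB, where firmware-RAID (dmraid) metadata usually lives.
# blockdev --getsz reports the size in 512-byte sectors, so seeking to
# SECTORS - 2048 lands exactly 1 MiB before the end of the device.
SECTORS=$(sudo blockdev --getsz /dev/sdX)
sudo dd if=/dev/zero of=/dev/sdX bs=512 seek=$((SECTORS - 2048)) count=2048
```

This is a few seconds of I/O instead of a day or two, at the cost of trusting that no tool stashed metadata somewhere in the middle of the disk.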
Did you use dmraid at some point? You can use dmraid -E to erase metadata.
I did not use dmraid, but I did partition these disks in a wide variety of ways while I was troubleshooting a boot-up problem. I put LVM volume groups and logical volumes on them in a variety of ways.
However, I also used LVM to remove everything (or so I thought).
Well, dd should work, but it is a blunderbuss to kill a mosquito, as they say.
dmraid metadata can also be written by on-board firmware raid controllers. You might have written some without knowing it if you were experimenting or got the disks used.
You could check for dmraid-style metadata (stored at the end of the disk and typically not erased by LVM or repartitioning) with dmraid -r.
Thank you.
I'm pretty sure my problem is on sde or sde1, which was indeed part of a raid array on the Areca 1220 controller.
Here's the section from vgscan:
Code:
vgscan /dev/sde: size is 1953525168 sectors
vgscan /dev/sde1: size is 690732 sectors
vgscan /dev/sde1: size is 690732 sectors
vgscan /dev/sde1: lvm2 label detected
vgscan Incorrect metadata area header checksum
vgscan /dev/sdf: size is 1953525168 sectors
vgscan /dev/sdf: size is 1953525168 sectors
vgscan /dev/sdf: No label detected
I am assuming the "Incorrect metadata area header checksum" refers to the entries immediately above it (/dev/sde1).
But this isn't the result I expected here:
Code:
sudo dmraid -r /dev/sde1/
No RAID disks and with names: "/dev/sde1/"
Code:
sudo dmraid -r /dev/sde/
No RAID disks and with names: "/dev/sde/"
And I assume that's why using the -E arg returns an error:
Background on the problematic drive (and a spare):
Drive: sde
Code:
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x000377d3
Partition Boot Start End Size Id System
/dev/sde1 * 63 690,794 690,732 83 Linux
Drive: sdf
Code:
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000
Partition Boot Start End Size Id System
Steps:
Duplicate the boot partition (and the 62 sectors preceding it) to the other drive (sdf):
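The copy step itself was not posted, but it can be sketched with dd. The sector counts here come from the fdisk output above (partition starts at sector 63 and ends at 690794, so sectors 0 through 690794 make 690795 sectors in total); they are specific to this layout and would need adjusting for any other disk:

```shell
# Copy the MBR (sector 0), the 62 gap sectors, and the whole boot partition
# (sectors 0-690794, i.e. 690795 sectors of 512 bytes) from sde to sdf.
sudo dd if=/dev/sde of=/dev/sdf bs=512 count=690795
```

Note this also copies sde's partition table and disk identifier onto sdf, which may or may not be what you want for the spare drive.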
Good. No errors. But will it boot? I don't expect a problem because I didn't mess with sde1, which is the boot partition. The MBR of sde is still in place and untouched as well. If I still have problems, I'll report back.
Let me update by saying that yesterday I started as I had before: I put new partitions on the disks, created new LVM physical volumes, volume groups, and logical volumes, made encrypted volumes in those LVs, and created and formatted file systems.
After doing all that (and at stages in between), I ran all the diagnostics I ran before. Now I am not getting any of the errors I saw previously. So I think that pretty much concludes this issue.
Previously, any time I deleted the partitions and then recreated them, the errors came back. I'm not seeing that now. I'm happy.
Any idea of what was different between before & now?
Yes, this whole long thread could be boiled down to this:
Code:
sudo pvremove /dev/sdf1
I learned that removing all the partitions and even using
Code:
dd if=/dev/zero of=/dev/sdX bs=1k count=1
is not enough to remove the LVM metadata.
I do not know exactly where LVM stores this metadata, but it apparently isn't in the first 1k of the disk.
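For what it's worth: the LVM2 label is written to one of the first four 512-byte sectors of the PV (by default the second sector), and the metadata area behind it typically extends to about 1 MiB -- which is why a 1 KiB wipe misses it. So zeroing the first MiB of the partition (1 MiB being a guess at a safe upper bound, and every bit as destructive as pvremove) should also have worked:

```shell
# Zero the first 1 MiB of the PV: covers the LVM2 label (one of sectors 0-3)
# and the default-sized metadata area that follows it.
# /dev/sdX1 is a placeholder -- destroys the PV, so check the name first.
sudo dd if=/dev/zero of=/dev/sdX1 bs=1M count=1
```

pvremove is still the cleaner tool, since it only clears the LVM signature instead of flattening everything in that first MiB.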
Most of what I did in this thread ended up being useful for me to learn and to verify my results, but in the future, if I get that error under similar circumstances, I will simply use
Code:
sudo pvremove /dev/sdX
Hopefully, this thread serves as a useful supplement to the information that Google usually serves up when one enters the error message "Incorrect metadata area header checksum".
The info I found in sources such as the LVM How-To did not seem to apply to my situation (although in hindsight I can see the relationship). I wasn't trying to recover physical volume metadata; I was trying to get rid of it.
EDIT: The reason it was so important to me to remove this error message is that I am building a file server. I reasoned that if I couldn't start with an error-free storage state, it certainly did not make sense to begin putting all my important data on this server. So I wanted to understand and remove any and all errors I was seeing before putting this server to use.