A disk was fitted whose partition had been an md component device in another system. After booting, the foreign md array was listed in /proc/mdstat, but the foreign VG was not listed by vgs.
The vgchange -ay run during boot was expected to have discovered the VG.
Running vgchange -ay manually changed nothing. Trying to make the md device a PV also failed, both implicitly with vgcreate and explicitly with pvcreate. Here's the last run:
Code:
root@vmhost:~# pvcreate -vv /dev/md127
DEGRADED MODE. Incomplete RAID LVs will be processed.
...
/dev/md127: size is 0 sectors
...
devices/filter not found in config file: no regex filter installed
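My own guess at a diagnosis (an assumption, not verified on this box): "size is 0 sectors" looks like what LVM reports when the kernel holds the md array inactive, i.e. auto-assembled but never started. A sketch of the check I have in mind; the /proc/mdstat capture below is hypothetical, not from the real machine:

```shell
# Hypothetical /proc/mdstat capture (invented, not from this box): an array
# that md auto-assembled but could not start shows as "inactive", and an
# inactive array reports a size of 0 sectors to anything querying it, LVM included.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md127 : inactive sdb2[1](S)
      976631488 blocks super 1.2
unused devices: <none>
EOF

# Flag any inactive arrays; on the real box the thing to try would then be
# "mdadm --run /dev/md127", or stopping and reassembling the array.
awk '/^md[0-9]+ : inactive/ { print $1 " is inactive" }' /tmp/mdstat.sample
```

If the array really is inactive, forcing it to run (or a stop-and-reassemble) is what I would try, but I would like confirmation first.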
What to try next?
The gory details
The Debian 8 Jessie 64-bit server was running normally.
It had been installed with EFI boot disabled in the BIOS, but the BIOS did not detect the HDDs until EFI was enabled (we have seen this on at least two Intel motherboard servers).
There were two 1 TB HDDs, formatted GPT. The first partitions were used for md0, which provided /boot; the second for md1, which provided the LVM PV.
One of the HDDs generated a few SMART errors. It was removed, but the manufacturer's test software found no defects, so it was refitted (probably a mistake: when one of a mirrored pair starts going bad, but not badly enough for md to drop it from the array, it may cause file system corruption. Your views?). The SMART errors increased and it was removed again. The server then would not boot, reporting that it could not find the LV for the root file system. Puzzlement.
The defective HDD was refitted. The server booted, but the md devices were using only the partitions from the defective HDD. Perhaps md had detected discrepancies and dropped the wrong underlying devices from the arrays.
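In hindsight, that guess could have been checked by comparing the Events counters that mdadm --examine prints for each member partition: the member with the lower count is the one md considers stale. A toy sketch of the comparison; the excerpts and numbers below are invented, and real --examine output puts each field on its own line:

```shell
# Invented, flattened "mdadm --examine" excerpts for the two members of md1
# (one line per member here for brevity; real output is multi-line per device).
cat > /tmp/examine.sample <<'EOF'
/dev/sda2 Events : 15012
/dev/sdb2 Events : 14987
EOF

# Sort members by event count, freshest first; the last line is the stale member.
sort -t: -k2 -rn /tmp/examine.sample
```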
The partitions from the good HDD were manually added back to the array. When synchronisation finished, the partitions from the defective HDD were removed from the array.
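For the record (and so someone can tell me if the procedure itself was wrong), the sequence was roughly the following. I am assuming sda was the good disk and sdb the defective one; the DRY_RUN wrapper is something I have added here so the sketch only prints the commands instead of running them:

```shell
# Reconstructed from memory; sda assumed good, sdb assumed defective.
# DRY_RUN=1 (the default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mdadm /dev/md0 --add /dev/sda1   # good disk's /boot partition back into md0
run mdadm /dev/md1 --add /dev/sda2   # good disk's PV partition back into md1
# ...wait until /proc/mdstat shows the resync has finished...
run mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # then drop the bad disk
run mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
```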
The replacement 1 TB HDD arrived, was fitted, partitioned and added to the md arrays. Trying to install GRUB generated multiple error messages, including "/usr/sbin/grub-probe: error: disk `mduuid/af598d16317f889bd7975cb97d51ad19' not found". Running "set 1 bios_grub on" in GNU parted on both sda and sdb allowed GRUB to install normally. I have since read that bios_grub marks a GPT partition as a BIOS boot partition, giving grub-install somewhere to embed its core image when a GPT disk is booted in BIOS mode, though I have not confirmed that; in any case it made the server bootable without the defective HDD.
The defective HDD was removed and RMAed.
A temporary 3 TB HDD was fitted, partitioned, and its partitions added to the md arrays.
The RMA replacement HDD arrived and was fitted, but the server again would not boot, complaining that it could not find the root LV. Puzzlement.
In an effort to clean up this mess, the OS was installed to the replacement HDD as the only HDD connected, this time in EFI mode.
The next step is to recover the data LVs from the old disks. The temporary 3 TB HDD was fitted and ... we are back to the beginning of this story.
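For concreteness, here is what I am considering trying next, and I would welcome corrections before I run it. This is a sketch only: /dev/md127 and /dev/sdb2 are guesses at what the device names will be, and the wrapper just prints the commands so nothing here is executed:

```shell
# Planned recovery sequence (sketch; device names are assumptions).
# DRY_RUN=1 (the default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mdadm --stop /dev/md127                      # drop the inactive half-assembly
run mdadm --assemble --run /dev/md127 /dev/sdb2  # force-start even if degraded
run pvscan                                       # LVM should now see a non-zero PV
run vgchange -ay                                 # activate the foreign VG's LVs
```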