This is something Which Should Not Happen, provided you have built your RAID and LVM in the standard and recommended way.
You build your RAID array and it is presented to LVM as /dev/md0, /dev/md1, and so on. If something goes wrong in the underlying RAID, LVM simply does not see it. The RAID driver hides disk failures and the disappearance of partitions like /dev/sda1 from the layers above it. LVM does not even know that /dev/sda1 or /dev/sdb1 exist; it only deals with /dev/md0 and /dev/md1, and those still exist, fully intact and unchanged. Therefore you need the /proc/mdstat output to know whether everything is still all right.
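For example, on a healthy two-disk RAID1 you would see something like this (the device names and sizes below are only illustrative):

    $ cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sda1[0] sdb1[1]
          488254464 blocks [2/2] [UU]

    unused devices: <none>

A degraded array shows [2/1] [U_] instead of [2/2] [UU], and 'mdadm --detail /dev/md0' gives the full picture of which member failed.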
I suspect that somehow you built the LVM not on /dev/md0 but on /dev/sda1 or the like. You can still access the disk partitions even when a RAID is built on top of them!
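You can check this with the LVM tools themselves. 'pvs' lists which block devices are in use as physical volumes (again, the names and sizes here are only illustrative):

    $ pvs
      PV         VG    Fmt  Attr PSize   PFree
      /dev/md0   vg00  lvm2 a--  465.76g    0

If pvs shows /dev/sda1 (or another raw partition) here instead of /dev/md0, then your LVM sits directly on the disk and bypasses the RAID entirely.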
Read this article very carefully:
https://wiki.archlinux.org/index.php...e_RAID_and_LVM. And see if what you observe is in line with what is described there.
jlinkels