LinuxQuestions.org


asincero 09-03-2017 10:18 AM

RAID1 is degraded, but shouldn't I still be able to use the array?
 
I have a logical volume group named "vg01". It consists of four RAID1 arrays: md0, md1, md2, and md3. Each array consists of two drives.

Arrays md0 and md2 are both in a degraded state; one drive in each array has failed. As a result, they show "[U_]" in their respective entries in "/proc/mdstat". But they are still marked as "active", so presumably they are still usable.
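
For what it's worth, this is how I've been checking the state of the arrays (md0 shown as an example):

Code:
cat /proc/mdstat
mdadm --detail /dev/md0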

However, when I try to activate the logical volume "srv" that resides on "vg01", I get two errors saying that it is unable to locate devices with particular UUIDs, and then an error saying that it's refusing partial activation of LV vg01/srv. It also says I can specify "--activationmode partial" to override.

I'm going to guess those two UUID messages correspond to the two degraded RAID1 arrays. But aren't those arrays still active, and shouldn't they therefore appear to the LVM subsystem as healthy devices?
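
I suppose I could compare the UUIDs in the error messages against what LVM reports for its physical volumes, with something like:

Code:
pvs -o +pv_uuid
blkid /dev/md0 /dev/md2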

What happens if I do specify "--activationmode partial"? Will I hose the data on "/dev/vg01/srv"?
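
For reference, the commands in question are roughly:

Code:
lvchange -ay vg01/srv                            # fails with the errors above
lvchange -ay --activationmode partial vg01/srv   # the override it suggests; I haven't dared to run this yet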

Does anybody have suggestions on how I can save the data on "/dev/vg01/srv"?

Ztcoracat 09-04-2017 10:22 PM

Hi:

I'm not a Raid expert but I did find a link if you'd like to do a backup.

https://linoxide.com/how-tos/how-to-...tion-on-linux/

Quote:

I get two errors saying that it is unable to locate devices with particular UUIDs, and then an error saying that it's refusing partial activation of LV vg01/srv. It also says I can specify "--activationmode partial" to override.
When you can, please post the exact error messages. Knowing what they are will help.
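
Maybe also post the output of these, if they work (I'm only guessing that they will help):

Code:
vgs -v vg01
lvs -a -o +devices vg01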


Quote:

But aren't those arrays active and therefore should appear to the LVM subsystem as healthy drives?
I'm not sure. "Unable to locate devices" by UUID might mean that the HDD can't be read. If the drive or drives are failing, the array dies with them. That's what I've learned so far.

I think you will need some kind of Raid Recovery Software. mdadm?
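
If mdadm is the right tool, I believe putting a replacement drive back into a degraded RAID1 looks something like this (the device names are only examples, please double-check before running anything):

Code:
mdadm --manage /dev/md0 --add /dev/sdc1
cat /proc/mdstat    # watch the rebuild progress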

https://www.linuxquestions.org/quest...-drive-925485/
https://serverfault.com/questions/37...-of-its-server

I'm still learning RAID, so I'm afraid my help is very limited.

Ztcoracat 09-04-2017 10:27 PM

https://access.redhat.com/solutions/400173

There may be a way to rebuild a degraded RAID, but I'm sorry, I don't know how.

If you are running RH you might want to give them a call.
1-888-733-4281

Hope you have a backup.

jlinkels 09-05-2017 07:10 AM

This is something Which Should Not Happen, provided you have built your RAID and LVM in the standard and recommended way.

You build your RAID arrays and they are presented to LVM as /dev/md0, /dev/md1, and so on. If there is something wrong with the underlying RAID, LVM simply does not see it. The RAID driver puts a layer over disk failures and the disappearance of partitions like /dev/sda1. LVM does not even know about the existence of /dev/sda1 or /dev/sdb1; it only deals with /dev/md0 and /dev/md1, and those still exist, fully and unchanged. That is why you need the /proc/mdstat output to know whether everything is still all right.

I suspect that somehow you built the LVM not on /dev/md0 but on /dev/sda1 or similar. You can still access the disk partitions directly even when a RAID is built on top of them!
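
You can check which devices LVM is actually using as physical volumes. On a correctly built setup, pvs should list the md devices, not the raw partitions:

Code:
pvs -o pv_name,vg_name
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT    # shows the whole stack: partition -> md -> LVM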

Read this article very carefully: https://wiki.archlinux.org/index.php...e_RAID_and_LVM. Then see if what you observe is in line with what is described there.

jlinkels

Ztcoracat 09-05-2017 04:44 PM

Thanks for joining the thread, jlinkels.

