LinuxQuestions.org


jimjxr 03-25-2013 12:16 AM

Help with degraded RAID1
 
First, my setup:
-- CentOS 6.4 with latest updates
-- Two 3TB SATA drives, sda and sdb
-- Created 3 RAID1 partitions:
---- md0: /boot, with GRUB installed on both drives per http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
---- md1: /, swap, /tmp partitions using LVM
---- md2: Encrypted LVM volume using LUKS

After the setup was done, I tested the RAID by pulling one drive and booting with the other, and it worked. Then I followed the steps in http://forums.fedoraforum.org/showpo...10&postcount=6 to prevent the OS from asking for the passphrase during boot, and wrote two scripts to open and close the LUKS volume; those scripts work too.
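
Roughly, the two scripts do something like this (the names below are placeholders rather than my exact volume and mount point names, and this assumes the LUKS container lives on an LVM logical volume):
Code:

# open script (sketch)
cryptsetup luksOpen /dev/vg_data/lv_secure secure   # prompts for the LUKS passphrase
mount /dev/mapper/secure /mnt/secure

# close script (sketch)
umount /mnt/secure
cryptsetup luksClose secure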

Now the problem: yesterday I proceeded to set up Samba on the server and share the encrypted mount, which also works. I also ran hdparm, vgdisplay, etc. to collect some information on the final system, and I think I forgot to run the close-LUKS script (which unmounts the volume and runs cryptsetup luksClose) before powering off.

Today when I started the machine, all hell broke loose: the RAID1 arrays will only start in degraded mode, and each one randomly picks its remaining member from either sda or sdb. Here's an example result of "cat /proc/mdstat":
Code:

Personalities : [raid1]
md0 : active raid1 sda1[0]
      262132 blocks super 1.0 [2/1] [U_]

md2 : active raid1 sda3[0]
      2896447356 blocks super 1.1 [2/1] [U_]
      bitmap: 2/22 pages [8KB], 65536KB chunk

md1 : active raid1 sdb2[1]
      33553340 blocks super 1.1 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
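
In case more detail than /proc/mdstat helps, I can post the output of something like this (device names just follow the output above):
Code:

mdadm --detail /dev/md0 /dev/md1 /dev/md2   # array state and which member slot is empty
mdadm --examine /dev/sda1 /dev/sdb1         # per-member superblock info for the md0 halves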

Running smartctl against the two drives seems to show they're healthy, and there are no obvious errors in /var/log/messages. I also tried reversing the changes made for LUKS (basically uncommenting the two lines in /etc/crypttab and /etc/fstab), but it didn't help. So the questions are:
1. What is the reason for this issue? Is it caused by not unmounting and not running cryptsetup luksClose on the encrypted LVM volume?
2. How do I fix it?

Thanks

vishesh 03-25-2013 12:10 PM

This does not seem like an encryption issue. Does the problem occur only with the RAID1 arrays, or with anything else?

Thanks

whizje 03-25-2013 02:28 PM

Code:

mdadm --add /dev/md0 /dev/sdb1
That should fix the issue; likewise for md1 and md2.
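
Based on your mdstat output the missing members look like sdb1, sda2 and sdb3, so presumably (verify which member each array is actually missing with mdadm --detail /dev/mdX before adding):
Code:

mdadm --add /dev/md0 /dev/sdb1   # md0 is currently running on sda1 only
mdadm --add /dev/md1 /dev/sda2   # md1 is currently running on sdb2 only
mdadm --add /dev/md2 /dev/sdb3   # md2 is currently running on sda3 only

You can watch the resync progress in /proc/mdstat while the mirrors rebuild.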

jimjxr 03-26-2013 11:44 AM

Quote:

Originally Posted by whizje (Post 4918605)
Code:

mdadm --add /dev/md0 /dev/sdb1
That should fix the issue; likewise for md1 and md2.

OK, thanks, this did fix the issue, although I'm still confused as to why it happened in the first place. The arrays shouldn't just degrade without reason, so what could have caused this?
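
I guess if it happens again I could compare the event counters and update times in each member's superblock before re-adding (the half with the lower event count should be the one that was dropped), something like:
Code:

mdadm --examine /dev/sda1 /dev/sdb1 | grep -E 'Update Time|Events'
mdadm --examine /dev/sda2 /dev/sdb2 | grep -E 'Update Time|Events'
mdadm --examine /dev/sda3 /dev/sdb3 | grep -E 'Update Time|Events'

But since nothing in /var/log/messages points at a drive error, I'm still not sure what actually triggered it.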

