First, my setup:
-- CentOS 6.4 with latest updates
-- Two 3TB SATA drives, sda and sdb
-- Created 3 RAID1 arrays:
---- md0: /boot, with GRUB set up on both drives per
http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
---- md1: /, swap, /tmp partitions using LVM
---- md2: Encrypted LVM volume using LUKS
After the setup was done, I tested the RAID by pulling one drive and booting with the other; it worked. Then I followed the steps in
http://forums.fedoraforum.org/showpo...10&postcount=6 to stop the OS from asking for the passphrase during boot, and wrote two scripts to open and close the LUKS volume; those scripts work too.
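For reference, the two scripts were along these lines (a minimal sketch only; the device path /dev/md2, the mapper name cryptdata, the volume group vg_data, and the mount point /mnt/data are placeholders, not the actual names on my system):

```shell
#!/bin/bash
# open-luks.sh -- unlock the encrypted RAID volume and mount its LV
# (all names below are illustrative placeholders)
cryptsetup luksOpen /dev/md2 cryptdata   # prompts for the passphrase
vgchange -ay vg_data                     # activate the LVM volume group inside
mount /dev/vg_data/lv_data /mnt/data

# close-luks.sh -- reverse of the above; meant to run before shutdown
umount /mnt/data
vgchange -an vg_data
cryptsetup luksClose cryptdata
```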
Now the problem: yesterday I set up Samba on the server and shared the encrypted mount, which also worked. I also ran hdparm, vgdisplay, etc. to collect some information on the final system, and I think I forgot to run the LUKS close script (which unmounts the volume and runs cryptsetup luksClose) before powering off.
Today when I started the machine, all hell broke loose: each RAID1 array will only start in degraded mode, and it randomly picks its one active member from sda or sdb. Here's an example result of "cat /proc/mdstat":
Code:
Personalities : [raid1]
md0 : active raid1 sda1[0]
262132 blocks super 1.0 [2/1] [U_]
md2 : active raid1 sda3[0]
2896447356 blocks super 1.1 [2/1] [U_]
bitmap: 2/22 pages [8KB], 65536KB chunk
md1 : active raid1 sdb2[1]
33553340 blocks super 1.1 [2/1] [_U]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
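To gather more detail on the degraded state above, I understand the metadata on both halves of each mirror can be compared; diverging "Events" counts would show which copy the kernel considers stale. A sketch of the inspection commands (run as root; partition names taken from the mdstat output above):

```shell
# Compare the RAID superblocks on both members of each mirror;
# a lower "Events" count marks the stale half.
mdadm --examine /dev/sda1 /dev/sdb1 | grep -E 'Events|Update Time'
mdadm --examine /dev/sda2 /dev/sdb2 | grep -E 'Events|Update Time'
mdadm --examine /dev/sda3 /dev/sdb3 | grep -E 'Events|Update Time'

# Show the current state of each assembled (degraded) array
mdadm --detail /dev/md0 /dev/md1 /dev/md2
```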
Running smartctl against both drives suggests they are healthy, and there are no obvious errors in /var/log/messages. I also tried reversing the changes made for LUKS (basically uncommenting the two lines in /etc/crypttab and /etc/fstab), but that didn't help. So the questions are:
1. What is the cause of this issue? Is it because I didn't unmount and run cryptsetup luksClose on the encrypted LVM volume before powering off?
2. How do I fix it?
Thanks