Dell R610 hardware RAID 1 configuration
This is a long story, so please bear with me; I am trying to give as much information as possible. I have a Dell R610 and an MD1000 PowerVault, which is to be used entirely for data storage. The R610 arrived first, so I set about installing the OS, CentOS 5.3. The machine is equipped with twin 146 GB disks. In the PERC configuration I set them up as RAID 1, and in the CentOS setup I manually repartitioned the disk with separate partitions for /var, /tmp, /xlog, /boot, /user and /. The install proceeded without a hitch and the machine worked beautifully.

Next, the PowerVault arrived and I set about configuring it. We have three disks in RAID 0, four disks in RAID 5, and a hot spare for the RAID 5 array. After setting up the RAID configuration in the PERC, I went on to create the Linux partitions with LVM, which also went without incident. But I was surprised to see LVM reporting that there were uninitialised partitions on /dev/sde. Further investigation showed that /dev/sde5 is the bootable partition on the R610; the uninitialised partitions are reported as sde1-sde4, and they contain the original partitions as shipped by Dell.
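For anyone trying to reproduce what I'm seeing, these are the kinds of commands I used to inspect the layout (the device names like /dev/sde are from my system and will differ elsewhere, and the by-path listing assumes udev has populated /dev/disk/by-path):

```shell
# Partition tables on all SCSI/SAS block devices
# (this is where the Dell-shipped sde1-sde4 show up next to sde5)
fdisk -l /dev/sd?

# Which partitions LVM knows about as physical volumes
pvs

# Map each block device back to its controller path, to tell the
# internal PERC disks apart from the MD1000 arrays
ls -l /dev/disk/by-path/ | grep -v part
```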
I confess that I was puzzled as to why the bootable device was now a partition on /dev/sde. Of course, I hadn't checked which /dev device it was before, so maybe it was that way all along, but I'd be surprised.
All seemed to work, however, and when the machine is running I can see the activity lights on both R610 disks coming on in sync most of the time - I assumed this was writes being mirrored to both disks, as planned. I also assumed that when disk 0 was lit alone it was reading, and that when disk 1 was lit alone it was somehow catching up with disk 0.
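Rather than inferring the mirror state from the activity lights, the array status can be read directly from the controller. A sketch assuming Dell OpenManage Server Administrator (OMSA) is installed (the controller number 0 here is an assumption; check the controller listing first):

```shell
# List controllers to find the right controller ID
omreport storage controller

# Virtual disk state: "Ready" means the mirror is healthy,
# "Degraded" means a member is missing or the array is rebuilding
omreport storage vdisk controller=0

# Per-physical-disk state, e.g. Online / Failed / Rebuilding
omreport storage pdisk controller=0
```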
This morning I decided to test the mirroring, so I deliberately pulled disk 0. I got a message that a disk was missing and that I could press any key to continue, which I did, and I was relieved that the machine came up without a hitch. So I powered the machine off, put disk 0 back in, removed disk 1, and powered on again. This time the OS did not boot, and I got messages saying that I could retry the boot, import a foreign configuration, and other things that I didn't think were a good idea. I powered the machine off again and replaced disk 0, but on reboot I got all the same trouble - it wouldn't boot and it would try to boot from the NIC.

I checked the boot sequence, which was as it should be: CDROM, hard disk, NIC. Finally, I examined the BIOS boot device settings and found that the boot RAID controller had somehow switched from the embedded controller to the additional one for the SAS PowerVault. I swapped those around and retried, and the machine booted without any problem, but the error light on disk 0 flashed constantly while the one on disk 1 was on solid. The activity lights on both disks were also flashing very rapidly. Eventually, however, they went off and the error light on disk 0 came on solid.
After all that I thought I'd have another look at LVM, and I see that /dev/sde is still reporting uninitialised partitions, while /dev/sde5 is still reported in the volume group as the boot device with my original partitions. I have obviously not done everything needed to set up mirroring properly, but I can't figure out what it is - can someone please help me? I'd be very grateful!