RAID 1 recovery after a drive failed...
OK, here is the situation. I have lots of Windows experience but not much Linux experience, though I am learning. I have a four-month-old Dell Precision 370 with a 64-bit Intel Pentium 4 3.4 GHz processor, 3.19 GB of RAM, and two SATA drives in RAID 1, running as a server with SuSE 9.3 installed. During the normal course of a very quiet day the system locked up. After the unclean shutdown I ran a BIOS-level hard drive diagnostic, which reported "Drive 0 = WD2500JD-75HBB0 - fail. Return code: 7." The other drive passed. So I went through the trouble of talking to Dell and getting a new hard drive. (Dell tech support was absolutely no help; they would not even tell me what "fail. Return code: 7" means. If anyone knows, please fill me in.) The Dell tech installed the drive and left.
The problem: I partitioned the new drive 0 the same way drive 1 is partitioned, but when I try to boot from the drives the machine goes from the BIOS screen straight to a black screen with a cursor blinking in the upper left corner.

I have been able to use the installation DVD to boot the system into a limited but somewhat functional state, and from there I ran mdadm to reconstruct the RAID 1. It seemed to take, and the output of --examine, --detail, and --query all suggest the array has been rebuilt and is functioning. (I would post the actual output, but the server's USB ports are not working and neither is the NIC.) Even so, after a reboot it still goes from the BIOS screen to the black screen with the blinking cursor.

I am fairly sure the boot loader is missing, or there is some other problem with the MBR, but I am not knowledgeable enough to be certain. So my question is: how can I find out whether the MBR/boot loader is missing, corrupt, or broken, and where do I put it back in a RAID 1 setup? I have sketched below what I did and what I think the fix might be.
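In case the partitioning step matters: I copied drive 1's layout by hand, but I have since read that sfdisk can clone a partition table directly. Something like the following (assuming the new drive is /dev/sda and the surviving drive is /dev/sdb; I would confirm with fdisk -l first):

    # dump the good drive's partition table and write it to the new drive
    sfdisk -d /dev/sdb | sfdisk /dev/sda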
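The rebuild itself went roughly like this (the partition and md device names here are from memory, since I cannot copy the real output off the box):

    # hot-add the new drive's partitions into the degraded arrays
    mdadm /dev/md0 --add /dev/sda1
    mdadm /dev/md1 --add /dev/sda2

    # watch the resync progress
    cat /proc/mdstat

    # then check the result
    mdadm --detail /dev/md0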
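As for checking the MBR: from what I have read, you can dump the first sector of the disk and look for the GRUB string, since SuSE 9.3 installs GRUB by default. Again, the device names are my assumption:

    # show readable strings in the first 512 bytes; a GRUB MBR normally contains "GRUB"
    dd if=/dev/sda bs=512 count=1 2>/dev/null | strings

And if the loader really is missing, my understanding is that the usual fix on RAID 1 is to reinstall GRUB into the MBR of *both* drives from the grub shell, mapping each drive to (hd0) in turn so that either one can boot on its own:

    grub> device (hd0) /dev/sda
    grub> root (hd0,1)        # the partition holding /boot; mine may differ
    grub> setup (hd0)
    grub> device (hd0) /dev/sdb
    grub> root (hd0,1)
    grub> setup (hd0)
    grub> quit

Is that the right approach, or am I off track?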
Thank you for any help anyone gives me.