Problem creating new mdadm raid 1 array
hello all;
Background: I have a server that was running a hardware isw raid on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So, I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel raid setup; it did show the new drive and I was able to set it to rebuild the raid. Continuing the reboot, everything came up just fine except the raid 1 on the system disk. I have tried many times to get the system to rebuild the raid using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I 'dd'ed the working disk to the new disk that was installed. At present, when I look at the system it does not show a raid setup on the system disk (this comprises the entire 1TB disk with two partitions: sda1 as / and sda2 as swap).

Problem: I have decided to forego the Intel raid and just use mdadm. I have a test system set up to duplicate (not the software, but the disk partitions) the server setup. Code:
[root@kilchis etc]# fdisk -l Code:
[root@kilchis sysconfig]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 Code:
[root@kilchis sysconfig]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
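For reference, a couple of standard commands for inspecting what actually got created (output omitted here): Code:
cat /proc/mdstat            # lists active arrays and their member devices
mdadm --detail /dev/md0     # detailed array state: clean, degraded, missing members
Any help is much appreciated!! |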
Instructions here. It was the first hit when I Google'd.
|
Been down that road... it fails
Quote:
good link though, I will keep it in mind. I have not gotten to the point where I need to set up grub. thanks |
I don't think you understand the sequence of actions required, as described at the link I gave you.
You have to:
- Create the array on the single unused drive
- Migrate the data from the existing system drive to the single-drive array
- Configure the array to boot
- Reboot onto the array
- Add the original system drive to the array
(A rough command sketch of the first step is below.)
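Assuming the unused drive is /dev/sdb and the new array will be /dev/md0 (substitute your actual device names), the first step, plus mounting the array for the migration, looks roughly like this: Code:
# create a degraded RAID1 with only the spare disk; "missing" reserves the slot for the current system drive
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# give the array a filesystem and mount it so data can be migrated onto it
mkfs.ext3 /dev/md0
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0
|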
You can't create a RAID array out of two disks that just happen to contain identical data. In addition, I don't know what happened when you tried to put the disk back on the Intel RAID. It is either hardware RAID, BIOS (fake) RAID, or mdadm RAID.
BIOS RAID usually doesn't work with Linux (but you'll only discover that when you lose a disk or try to do something with the array). If you want to switch to mdadm RAID, disassemble the system down to a single disk and recreate the array. The sequence is like this:
1. Get a working system on one disk (the primary).
2. Clean the secondary disk and partition it.
3. Create a degraded array on the secondary disk.
4. Copy the working system from the primary onto the secondary.
5. Clean the primary, and add it to the degraded array to make the array complete.
An excellent guide: http://www200.pair.com/mecham/raid/r...aded-etch.html This is not Debian-specific. (A rough sketch of the final step is below.)
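Assuming the array is /dev/md0 and the old primary is /dev/sda (device names and the config file path are examples; on Debian the file is /etc/mdadm/mdadm.conf), step 5 ends up looking roughly like this once the system is booted from the degraded array: Code:
# record the array so it gets assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
# copy the partition table from the array disk onto the old primary,
# then add it; mdadm starts the rebuild automatically
sfdisk -d /dev/sdb | sfdisk /dev/sda
mdadm --add /dev/md0 /dev/sda1
jlinkels |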
Then I have questions????
Quote:
If that is how it works, then you are right, I did not understand the sequence it requires. As a side note, that is something I would have expected to find in the man pages or the Linux documentation (at least I didn't read it there). thanks |
You have to copy the data from the non-RAIDed partition to the RAIDed partition because you don't have RAID yet. After that, once you have completed the array, the data will copy over automatically, yes.
It is in the Linux documentation. macemoneta and I gave you the links.
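For the copy itself, something along these lines is the usual approach (only a sketch, assuming the degraded array is mounted at /mnt/md0; adjust paths to your layout): Code:
# copy the live root filesystem onto the array; -x keeps rsync on the root
# filesystem, so /proc, /sys and the mounted array itself are not descended into
rsync -avxH / /mnt/md0/
jlinkels |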
Still having problems with booting
Okay, so I followed the procedure in the link. I only have one partition so it is easy to work with. When I reboot I get the following error:
Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,0)
I added the grub entries just as they are in the link, and then copied them to the working drive. Anyone have any ideas on this?
menu.lst - boot lines Code:
(the original boot line pointed to root=LABEL=/1)
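The entries I added follow the general shape in the guide; roughly like this (the kernel and initrd file names are placeholders here, not copied from my actual menu.lst): Code:
title Linux (raid - boot from first disk)
    root (hd0,0)
    kernel /boot/vmlinuz-<version> ro root=/dev/md0
    initrd /boot/initrd-<version>.img

title Linux (raid - boot from second disk)
    root (hd1,0)
    kernel /boot/vmlinuz-<version> ro root=/dev/md0
    initrd /boot/initrd-<version>.img
|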
Just to be clear, you set your BIOS to boot the new RAID drive, and the grub configuration is the same on both drives? Which grub entry did you boot?
|
Raid boot problem
Well, after checking the BIOS, it is set up to boot the first disk.
The grub configuration is set up to boot the first (1) option in grub, which points to both disks in the raid. I have tried to boot each of the entries and they all fail when the BIOS is pointing to the first disk.
-------
(next day) Changed the BIOS so that it is looking at the second disk as the boot disk. Selected the grub entry (1) for both disks in the raid. It gets further, but still fails looking for /dev/hda:
Setting up Logical Volume Management: /dev/hda: open failed: no medium found
Checking Filesystems
fsck.ext3: Invalid argument while trying to open /dev/md0
kernel direct mapping tables up to 100000000 @ 10000-15000 [Failed]
-----
I get the same error when booting the original grub entry pointing to the first disk.
-----
When I try to boot entry (2), pointing to /dev/sdb1 of the raid, it fails with a kernel crash. |
You want the BIOS to boot the RAID drive. Since the RAID array is operating degraded, you want to use the second grub menu entry. Details on the kernel issue would help.
You mentioned earlier that you only had a single partition on the drive. That means that /boot is just a regular directory. Are you sure that your BIOS doesn't have a limitation on addressing? If you suspect that it does, you need a separate /boot partition at the beginning of the drive.
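If it does come to that, a sketch of laying out the spare disk with a small /boot at the front (device name and sizes are only examples; assumes a reasonably recent parted): Code:
parted -s /dev/sdb mklabel msdos                         # destroys the existing partition table
parted -s /dev/sdb mkpart primary ext3 1MiB 200MiB       # small /boot at the very start of the disk
parted -s /dev/sdb mkpart primary ext3 200MiB 95%        # root
parted -s /dev/sdb mkpart primary linux-swap 95% 100%    # swap
parted -s /dev/sdb set 1 boot on                         # mark /boot bootable
|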
more info
Quote:
Yes, I have a single partition that includes /boot as a directory. Limitations on addressing: not sure; how would you find this info? Hopefully this will not be the case, as I am trying to duplicate my working file server. Since this is a test system, I am going to scrub it and start over. I will add to the thread when I get back to this point, or on success. |
rebuilt system and reconfigured raid - still no joy
I have rebuilt the system and did a copy/paste after each of the commands, saving them on another system. When I reboot the system (BIOS using the second drive) and select the first grub entry for mirroring, the system fails to boot:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,0)
But booting from the original grub entry that points to root=LABEL=/1 works and shows that I am using /dev/md0. When I try to add /dev/sda1 (which is the original drive) I get:
mdadm --add /dev/md0 /dev/sda1
mdadm: Cannot open /dev/sda1: Device or resource busy
swapon -s shows that both /dev/sda2 and /dev/sdb2 are being used correctly.
Not sure where to go from here. Here is the copy/paste of what I have done: Code:
[root@kilchis /]# sfdisk -l /dev/sda
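And these are the standard places I know of to check for whatever is holding sda1 (output not included here): Code:
cat /proc/mdstat            # is sda1 already claimed by an md array?
mdadm --examine /dev/sda1   # does sda1 still carry an old md superblock?
mount | grep sda1           # is sda1 itself mounted, e.g. as the LABEL=/1 root?
dmraid -r                   # is the old Intel fakeraid metadata still claiming the disk?
|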