kalujny 06-20-2010 05:06 AM

Debian testing: RAID 1 with 2 disks starts degraded after each reboot from 3rd disk.
 
Hello All,

This is my first post, and I'm also posting from the country without access to my desktop, so please excuse me if I forget some details.

Basically, I installed Debian Lenny, creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/".
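
For completeness, that layout can be double-checked with something like the following (device names are simply the ones from my setup, so adjust as needed):

    cat /proc/mdstat              # both arrays should show up as raid1
    sudo mdadm --detail /dev/md1  # members and state of the "/" array
    swapon -s                     # should list /dev/md0 as the active swap
    df -h /                       # should show / mounted from /dev/md1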

I did not pay much attention, but it seemed to work fine at first; both RAID devices were up early during boot, I think.

After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling and installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I got it working.

Let me describe my HDD setup: when I run "sudo fdisk -l" it shows RAID partitions sda1 and sda2 on sda, RAID partitions sdb1 and sdb2 on sdb (those are my two 1 TB drives), and sdc1, sdc2 and sdc5 on my 3rd, 160 GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS).
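
To be explicit about the commands (drive names as above; the RAID members show up in fdisk with the "Linux raid autodetect" partition type):

    sudo fdisk -l                    # lists sda1/sda2 and sdb1/sdb2 as raid autodetect, plus sdc1/sdc2/sdc5
    sudo mdadm --examine /dev/sdb1   # prints the md superblock if the partition really is a RAID member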

The problem is that the RAID starts degraded every time (it comes up with 1 out of 2 devices). When I run "cat /proc/mdstat" I get "[U_]" status, and the 2nd device shows as "removed" on both md devices.
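
For the record, the degraded state looks roughly like this when inspected (md numbers from my setup):

    cat /proc/mdstat               # shows [2/1] and [U_] instead of [2/2] and [UU]
    sudo mdadm --detail /dev/md0   # "State : clean, degraded", second slot listed as "removed"
    sudo mdadm --detail /dev/md1   # same story for the root array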

I can successfully run "partx -a /dev/sdb", which brings back sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sdb1". After I re-add the devices it syncs the disks, and after about 3 hours mdstat shows a clean status.
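
The full re-add sequence is roughly this (assuming, as above, that the missing halves are sdb1 and sdb2, and that sdb1 belongs to md0 and sdb2 to md1):

    sudo partx -a /dev/sdb          # make the kernel see sdb1 and sdb2 again
    sudo mdadm --add /dev/md0 /dev/sdb1
    sudo mdadm --add /dev/md1 /dev/sdb2
    watch cat /proc/mdstat          # wait for the resync to finish (about 3 hours here)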

However, when I reboot, it again starts with a degraded array.

I get the feeling that after I re-add the disk and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.
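
For reference, the comparison is basically between these two (the ARRAY lines and UUIDs are per-system, of course):

    sudo mdadm --examine --scan   # ARRAY lines read from the on-disk superblocks
    cat /etc/mdadm/mdadm.conf     # ARRAY lines the initramfs uses when assembling at boot

They match on my system, which is what makes me think something else (the initramfs itself, maybe) needs to be refreshed.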

So that's it; sorry for the long post, and hoping for your kind answers.
I will also be able to provide more information when I get back to my desktop.

Thanks,
Ilya.

kalujny 06-21-2010 01:24 AM

OK, I edited /etc/mdadm/mdadm.conf to remove a duplicate line for md0, ran update-initramfs -u, set one of the 1 TB drives as the boot drive in the BIOS, resynced the array, and now it starts fine.
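
For anyone hitting the same thing, the fix boiled down to roughly the following (paths and devices as in my earlier posts, so treat this as an example rather than a recipe):

    sudo nano /etc/mdadm/mdadm.conf   # removed the duplicate ARRAY line for md0
    sudo update-initramfs -u          # rebuilt the initramfs so it assembles with the corrected config
    # then set one of the 1 TB RAID drives as the boot device in the BIOS,
    # re-added the missing members as before and waited for the resync:
    cat /proc/mdstat                  # both arrays now show [UU]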

Not sure which (if any) of those did the trick.

