Old 06-20-2010, 05:06 AM   #1
kalujny
LQ Newbie
 
Registered: Jun 2010
Posts: 3

Rep: Reputation: 0
Debian testing: RAID 1 with 2 disks starts degraded after each reboot from 3rd disk.


Hello All,

This is my first post, and I'm also posting from the countryside without access to my desktop, so please excuse me if I forget a detail.

Basically, I installed Debian Lenny, creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/".
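
For reference, when both halves are present, /proc/mdstat should look roughly like this (block counts elided since I don't have them at hand, and device order is assumed):

$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      ... blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      ... blocks [2/2] [UU]
unused devices: <none>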

I did not pay much attention, but it seemed to work fine at first: both RAID devices were up early during boot, I think.

After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling and installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.

Let me describe my HDD setup: when I do "sudo fdisk -l", it shows RAID partitions sda1 and sda2 on sda, RAID partitions sdb1 and sdb2 on sdb (those are my two 1 TB drives), and sdc1, sdc2, sdc5 on my third, 160 GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS).
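
To sketch it (size columns omitted since I don't have the exact numbers here, and the sdc partition types are my best guess):

$ sudo fdisk -l
   Device Boot  ...  Id  System
/dev/sda1        ...  fd  Linux raid autodetect
/dev/sda2        ...  fd  Linux raid autodetect
/dev/sdb1        ...  fd  Linux raid autodetect
/dev/sdb2        ...  fd  Linux raid autodetect
/dev/sdc1   *    ...  83  Linux
/dev/sdc2        ...   5  Extended
/dev/sdc5        ...  82  Linux swap / Solaris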

The problem is that the RAID starts degraded every time (with 1 out of 2 devices). When I do "cat /proc/mdstat" I get "[U_]" status, and the 2nd device shows as "removed" on both md devices.
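
From memory, the degraded state looks something like this (again with block counts elided):

$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0]
      ... blocks [2/1] [U_]
md0 : active raid1 sda1[0]
      ... blocks [2/1] [U_]

and "sudo mdadm --detail /dev/md0" lists the second slot as "removed".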

I can successfully run "partx -a /dev/sdb", which brings back sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sdb1" (and the same for md1/sdb2). After I re-add the devices it resyncs the disks, and after about 3 hours I see a fine status in mdstat.
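
The full recovery sequence I run is roughly this (assuming sdb is the disk that comes up missing):

# make the kernel re-read sdb's partition table so sdb1/sdb2 show up
sudo partx -a /dev/sdb
# re-add the missing halves to both arrays
sudo mdadm --add /dev/md0 /dev/sdb1
sudo mdadm --add /dev/md1 /dev/sdb2
# watch the rebuild until both arrays show [UU]
watch cat /proc/mdstat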

However, when I reboot, it again starts with a degraded array.

I get a feeling that after I re-add the disk and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.
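
For what it's worth, the scan output looks like this (UUIDs elided), and to my eye it matches the ARRAY lines already in /etc/mdadm/mdadm.conf:

$ sudo mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=...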

So that's it; sorry for the long post, and hoping for your kind answers.
I will also be able to provide more information when I get back to my desktop.

Thanks,
Ilya.

Last edited by kalujny; 06-20-2010 at 05:10 AM.
 
Old 06-21-2010, 01:24 AM   #2
kalujny
LQ Newbie
 
Registered: Jun 2010
Posts: 3

Original Poster
Rep: Reputation: 0
OK, I edited /etc/mdadm/mdadm.conf to remove a duplicate line for md0, ran update-initramfs -u, set one of the 1 TB drives as the boot drive in the BIOS, resynced the array, and now it starts fine.

Not sure which (if any) of those did the trick.
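
For anyone who hits the same problem, the sequence was roughly this (device names as in my setup; the duplicate ARRAY line was specific to my config):

# remove the duplicate ARRAY line for md0 from the config
sudo editor /etc/mdadm/mdadm.conf
# rebuild the initramfs so the boot environment uses the corrected config
sudo update-initramfs -u
# re-add the missing halves and let the arrays resync
sudo mdadm --add /dev/md0 /dev/sdb1
sudo mdadm --add /dev/md1 /dev/sdb2
# (I also set one of the 1 TB drives as the boot device in the BIOS)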
 
  



Tags
degraded, mdadm, raid


