LinuxQuestions.org
Slackware This Forum is for the discussion of Slackware Linux.


View Poll Results: Software RAID (e.g. mdadm) or hardware (e.g. PCI card)?
Software RAID does it for me: 6 votes (42.86%)
Software RAID blows, hardware all the way: 8 votes (57.14%)
Voters: 14

Old 10-05-2007, 12:21 PM   #1
slack-12.0.0
LQ Newbie
 
Registered: Oct 2007
Posts: 4

Rep: Reputation: 0
Degraded raid1 array problem on Slack 12.0.0 / mdadm


Versioning:
Lilo - LILO version 22.8
mdadm - v2.6.1 - 22nd February 2007
uname - Linux 2.6.21.5-smp #2 (stock slackware install kernel).

Background info:
I'm trying to create three raid1 devices: primary disk /dev/sda, secondary disk /dev/sdd. Slack was installed on /dev/sda a while back; I'm now trying to add RAID post-install. The two disks are identical 74GB Raptors. The setup is as follows:

Array      Primary partition   Secondary partition   Mount point
/dev/md0   sda1                sdd1                  /
/dev/md1   sda2                sdd2                  /var
/dev/md2   sda3                sdd3                  swap

PROBLEM:
Having created a degraded raid1 array on /dev/sdd, copied the filesystem across, and altered LILO and fstab to boot/mount from the md0 array (everything okay so far), I cannot add the original partition (sda1) to the md0 array. I've spent six hours on this single problem today; any help will be much appreciated.
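For anyone following along, the usual sequence for this kind of migration, reconstructed from the description above (not the poster's exact commands), looks roughly like this. The hypothetical `run` wrapper only echoes, so nothing destructive executes as written; swap the `echo` for real execution only after review:

```shell
# Sketch of a degraded-raid1 migration using the thread's device names.
# 'run' only prints each command, so this is safe to paste as-is.
run() { echo "+ $*"; }
run mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdd1
run mkfs.ext3 /dev/md0
run mount /dev/md0 /mnt
run cp -ax / /mnt                    # copy the live root onto the array
# ...edit /mnt/etc/fstab and lilo.conf, rerun lilo, reboot onto md0, then:
run mdadm /dev/md0 -a /dev/sda1      # finally re-add the original partition
```

The `missing` keyword is what creates the array in the degraded [_U] state so the second member can be added later.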


[host]$: cat /proc/mdstat
Personalities : (removed for legibility)
md1 : active raid1 sdd2[1] sda2[0]
14651200 blocks [2/2] [UU]

md2 : active raid1 sdd3[1] sda3[0]
1951808 blocks [2/2] [UU]

md0 : active raid1 sdd1[1]
56002432 blocks [2/1] [_U]


unused devices: <none>
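In this output, `[2/1] [_U]` on md0 means two slots with only one active; the underscore marks the missing mirror half. A small sketch that flags degraded arrays from mdstat-style text (run here against a sample copied from the output above; on a live box you would point it at /proc/mdstat instead):

```shell
# Flag degraded md arrays: an underscore in the status field ([_U]/[U_])
# means a missing mirror half. Sample text mirrors the mdstat shown above.
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid1 sdd2[1] sda2[0]
      14651200 blocks [2/2] [UU]
md0 : active raid1 sdd1[1]
      56002432 blocks [2/1] [_U]
EOF
awk '/^md/ { dev = $1 }
     /blocks/ && $NF ~ /_/ { print dev, "is degraded:", $NF }' /tmp/mdstat.sample
# → md0 is degraded: [_U]
```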

[host]$: mdadm /dev/md0 -a /dev/sda1
mdadm: Cannot open /dev/sda1: Device or resource busy

[host]$: mount
/dev/md/0 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
usbfs on /proc/bus/usb type usbfs (rw)
(Not sure, but could these be running on the old root partition still, i.e. sda1?)

/dev/md/1 on /var type ext3 (rw)
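One caveat worth knowing here: `mount` reads /etc/mtab, which was copied across with the filesystem, so it can report /dev/md/0 as root even if the kernel actually mounted /dev/sda1 (which would explain the "device busy" error). /proc/mounts and the device number of / are authoritative. A quick check, assuming GNU coreutils `stat`:

```shell
# The kernel's own view of the root mount, not /etc/mtab's claim:
grep ' / ' /proc/mounts
# Hex device number backing /. Compare against 'ls -l /dev/md0 /dev/sda1':
# md0 is major 9, minor 0 (hex 900); sda1 is major 8, minor 1 (hex 801).
stat -c '%D' /
```

If the second command prints 801 rather than 900, the kernel really is rooted on sda1, and that is what's holding the partition busy.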

[host]$: grep sda1 /var/log/dmesg
sda: sda1 sda2 sda3
md: invalid raid superblock magic on sda1
md: sda1 has invalid sb, not importing!
EXT3 FS on sda1, internal journal

[host]$: mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.

[host]$: mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Oct 5 17:37:37 2007
Raid Level : raid1
Array Size : 56002432 (53.41 GiB 57.35 GB)
Used Dev Size : 56002432 (53.41 GiB 57.35 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Oct 5 18:02:59 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : cbee3d32:2ccf30ee:d9263c9e:78e2ac93
Events : 0.10

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 49 1 active sync /dev/sdd1
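As a sanity check that size is not the blocker, the sda1 partition can be compared against the array's used device size; both figures are in 1 KiB units, copied from the output in this thread (partition "Blocks" from the fdisk listing below, "Used Dev Size" from mdadm --detail above):

```shell
# Both figures are in 1 KiB units, copied from this thread's output:
part_kib=56002558   # /dev/sda1 "Blocks" column from fdisk
used_kib=56002432   # "Used Dev Size" from mdadm --detail
if [ "$part_kib" -ge "$used_kib" ]; then
  echo "size OK: sda1 is large enough to rejoin md0"
else
  echo "size problem: sda1 is smaller than the array's used size"
fi
```

The partition is slightly larger than the used size, which also leaves room for the 0.90 superblock that lives near the end of each member, so the obstacle here is "busy", not capacity.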

[host]$: fdisk -l /dev/sda /dev/sdd

Disk /dev/sda: 74.3 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 6972 56002558+ fd Linux raid autodetect
/dev/sda2 6973 8796 14651280 fd Linux raid autodetect
/dev/sda3 8797 9039 1951897+ fd Linux raid autodetect

Disk /dev/sdd: 74.3 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 * 1 6972 56002558+ fd Linux raid autodetect
/dev/sdd2 6973 8796 14651280 fd Linux raid autodetect
/dev/sdd3 8797 9039 1951897+ fd Linux raid autodetect


Thanks in advance for any replies and happy to provide more info as needed.

Last edited by slack-12.0.0; 10-05-2007 at 12:29 PM. Reason: Added fdisk output
 
Old 10-05-2007, 06:29 PM   #2
slack-12.0.0
LQ Newbie
 
Registered: Oct 2007
Posts: 4

Original Poster
Rep: Reputation: 0
I should really add that this is my first attempt at adding raid1 under Linux, to a live system.

Thus, the root cause is most likely a very simple step that will hopefully be apparent to an experienced raid1 user. If you're reading this and thinking "oh, he must have done this..", don't count on it!

Also, in case my question has become obscured by the level of detail in the post, it is: "How can I fix whatever is stopping me adding partition /dev/sda1 to raid1 array /dev/md0?"

Thanks

Chris
 
Old 10-06-2007, 05:59 AM   #3
hutyerah
Member
 
Registered: Dec 2005
Location: Brisbane, Australia
Distribution: Slackware
Posts: 39

Rep: Reputation: 16
Try booting off your Slackware install disk and doing it there.
 
Old 10-06-2007, 06:10 AM   #4
slack-12.0.0
LQ Newbie
 
Registered: Oct 2007
Posts: 4

Original Poster
Rep: Reputation: 0
Thanks for the reply.

I've recently used Knoppix to get in and confirm the two drives match, in terms of /etc/lilo.conf and /etc/fstab.

The server is now located in a data centre, and physical access is a nightmare to arrange; it's also a fully configured and operational production server. Reinstalling the OS and setting up RAID at this point is an option, but one I'd rather not pursue. I'm 400 miles from the data centre at present, and I don't think this justifies a complete reblast of the OS.
 
Old 10-06-2007, 07:56 PM   #5
hutyerah
Member
 
Registered: Dec 2005
Location: Brisbane, Australia
Distribution: Slackware
Posts: 39

Rep: Reputation: 16
I see. Well, the reason I suggested it is that that's how I did it when I set up RAID 1 on a server I have. And given it's complaining about the device being in use, it seems to be what you need to do. If you can boot off Knoppix again, then you should be able to do it from there too once you get a terminal. I'm pretty sure Knoppix would have mdadm, and if not you can always put it on there. You definitely don't need to reinstall the OS, though.

I thought you would do it the other way around, though: add the running disk first and then the other. But I suppose if you have the right disk marked as not-in-sync (sorry, can't remember the term used) it would work. If only I could find that Slackware on RAID guide I used...
 
Old 10-12-2007, 06:36 AM   #6
slack-12.0.0
LQ Newbie
 
Registered: Oct 2007
Posts: 4

Original Poster
Rep: Reputation: 0
Problem Fixed

The problem seemed to be with booting from, and mounting root on, a RAID array.

I decided the quickest fix was to visit the datacentre and create the RAID during a reinstall of the OS.

In hindsight, I think the problem was LILO-related, and a reinstall to the MBR on the array would have fixed it. The main symptom was that lilo -v wouldn't update to boot from the RAID; a LILO reinstall may have been better than a full OS reblast, but the opportunity for a bit of (late) spring cleaning is always nice.
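For reference, a LILO setup that boots from a raid1 array usually needs `boot` pointed at the md device plus the `raid-extra-boot` option so boot records land on each member disk's MBR. A hypothetical lilo.conf fragment (the option names are real LILO 22.x options; the paths and label are illustrative):

```
# Illustrative lilo.conf fragment for a raid1 root (LILO 22.x)
boot = /dev/md0               # install the boot loader on the array
raid-extra-boot = mbr-only    # also write boot records to each member's MBR
root = /dev/md0

image = /boot/vmlinuz
  label = Linux
  read-only
```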
 
  



Tags
degraded, invalid, magic, mdadm, raid, raid1, superblock




