
slack-12.0.0 10-05-2007 12:21 PM

Degraded raid1 array problem on slack 12.0.0 / mdadm
 
Versions:
LILO - version 22.8
mdadm - v2.6.1 - 22nd February 2007
Kernel - Linux 2.6.21.5-smp #2 (stock Slackware install kernel)

Background info:
I'm trying to create three RAID1 devices: primary disk /dev/sda, secondary disk /dev/sdd. Slackware was installed on /dev/sda a while back; I'm now trying to add RAID post-install. The two disks are identical 74GB Raptors. The intended setup is as follows, with the creation commands sketched after the table:

Array      Primary partition   Secondary partition   Mount point
/dev/md0   sda1                sdd1                  /
/dev/md1   sda2                sdd2                  /var
/dev/md2   sda3                sdd3                  swap
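
For reference, the degraded arrays were created along these lines - reconstructed from memory, so treat it as a sketch rather than a verbatim history:

[host]$: mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdd1
[host]$: mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdd2
[host]$: mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdd3

The "missing" keyword holds a slot open for the original sda partition, which is meant to be added later with "mdadm /dev/mdX -a /dev/sdaX".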

PROBLEM:
Having created degraded RAID1 arrays on /dev/sdd, copied the filesystem across, and altered LILO and fstab to boot/mount from the md0 array (everything okay so far), I cannot add the original partition (sda1) to the md0 array. I've spent six hours on this single problem today; any help will be much appreciated.
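
The fstab side of it looks roughly like this (md names per the table above):

/dev/md0   /      ext3   defaults   1 1
/dev/md1   /var   ext3   defaults   1 2
/dev/md2   swap   swap   defaults   0 0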


[host]$: cat /proc/mdstat
Personalities : (removed for legibility)
md1 : active raid1 sdd2[1] sda2[0]
14651200 blocks [2/2] [UU]

md2 : active raid1 sdd3[1] sda3[0]
1951808 blocks [2/2] [UU]

md0 : active raid1 sdd1[1]
56002432 blocks [2/1] [_U]


unused devices: <none>

[host]$: mdadm /dev/md0 -a /dev/sda1
mdadm: Cannot open /dev/sda1: Device or resource busy
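
(Side note: to see what is actually pinning the partition, something like this should work - fuser ships with psmisc:

[host]$: fuser -vm /dev/sda1

It lists the processes using the filesystem on that device.)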

[host]$: mount
/dev/md/0 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
usbfs on /proc/bus/usb type usbfs (rw)
...Not sure, but could these be running on the old root partition still, sda1?

/dev/md/1 on /var type ext3 (rw)

[host]$: grep sda1 /var/log/dmesg
sda: sda1 sda2 sda3
md: invalid raid superblock magic on sda1
md: sda1 has invalid sb, not importing!
EXT3 FS on sda1, internal journal
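
(That "EXT3 FS on sda1" line makes me suspect the kernel actually rooted on sda1, and the mount output above is just reading a stale /etc/mtab. One way to check would be comparing device numbers - stat can print the root filesystem's device in hex, and sda1 is major 8, minor 1, i.e. 801, whereas md0 would be 900:

[host]$: stat -c '%D' /
[host]$: ls -l /dev/sda1 /dev/md0

If stat says 801, the "busy" error is simply the live root refusing to be grabbed.)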

[host]$: mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.
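
(As I understand it, --examine reads the md superblock off a component partition rather than off the assembled array, so "no superblock on /dev/md0" is presumably expected; --detail below is the array-level view. Examining the working component directly would be:

[host]$: mdadm --examine /dev/sdd1 )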

[host]$: mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Oct 5 17:37:37 2007
Raid Level : raid1
Array Size : 56002432 (53.41 GiB 57.35 GB)
Used Dev Size : 56002432 (53.41 GiB 57.35 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Oct 5 18:02:59 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : cbee3d32:2ccf30ee:d9263c9e:78e2ac93
Events : 0.10

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 49 1 active sync /dev/sdd1

[host]$: fdisk -l /dev/sda /dev/sdd

Disk /dev/sda: 74.3 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 6972 56002558+ fd Linux raid autodetect
/dev/sda2 6973 8796 14651280 fd Linux raid autodetect
/dev/sda3 8797 9039 1951897+ fd Linux raid autodetect

Disk /dev/sdd: 74.3 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 * 1 6972 56002558+ fd Linux raid autodetect
/dev/sdd2 6973 8796 14651280 fd Linux raid autodetect
/dev/sdd3 8797 9039 1951897+ fd Linux raid autodetect


Thanks in advance for any replies and happy to provide more info as needed.

slack-12.0.0 10-05-2007 06:29 PM

I should add that this is my first attempt at adding RAID1 under Linux, and on a live system at that.

Thus, the root cause is most likely a very simple step that will hopefully be apparent to an experienced RAID1 user. If you're reading this and thinking "oh, he must have done this..", don't count on it!

Also, in case my question has become obscured by the level of detail in the post, it is: "How can I fix whatever is stopping me from adding partition /dev/sda1 to RAID1 array /dev/md0?"

Thanks

Chris

hutyerah 10-06-2007 05:59 AM

Try booting off your Slackware install disk and doing it there.
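
From the rescue environment it should just be a case of something like this (untested, so check the device names first):

[rescue]$: mdadm --assemble /dev/md0 /dev/sdd1
[rescue]$: mdadm /dev/md0 -a /dev/sda1
[rescue]$: cat /proc/mdstat

With nothing mounted from sda1 there, the "busy" error should go away, and /proc/mdstat will show the rebuild progress.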

slack-12.0.0 10-06-2007 06:10 AM

Thanks for the reply.

I've recently used Knoppix to get in and confirm that the two drives match in terms of /etc/lilo.conf and /etc/fstab.

The server is now located in a data centre, and physical access is a nightmare to arrange; it's also a fully configured, operational production server. Reinstalling the OS and setting up RAID at this point is an option, but one I'd rather not pursue - I'm 400 miles from the data centre at present, and I don't think this justifies a complete reblast of the OS.

hutyerah 10-06-2007 07:56 PM

I see. Well, I suggested it because that's how I did it when I set up RAID1 on a server of mine. And given that it's complaining about the device being in use, it seems to be what you need to do. If you can boot off Knoppix again, you should be able to do it from there too once you get a terminal. I'm pretty sure Knoppix has mdadm, and if not you can always put it on there. You definitely don't need to reinstall the OS, though.

I would have thought you'd do it the other way around, though - add the running disk first and then the other. But I suppose if you have the right disk marked as not-in-sync (sorry, I can't remember the exact term) it would work. If only I could find that Slackware-on-RAID guide I used...

slack-12.0.0 10-12-2007 06:36 AM

Problem Fixed
 
The problem seemed to be with booting from, and rooting to, a RAID array.

I decided the quickest fix was to visit the data centre and create the RAID during a reinstall of the OS.

In hindsight, I think the problem was LILO-related, and a reinstall of the boot loader to the MBR on the array would have fixed it. The main issue was that "lilo -v" wouldn't update to the RAID; a LILO reinstall may have been better than a full OS reblast, but the opportunity for a bit of (late) spring cleaning is always nice.
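
For anyone who hits the same thing, the lilo.conf I'd try first next time looks something like this - a sketch only, as the kernel path and labels will differ per install:

boot = /dev/md0
raid-extra-boot = mbr-only
root = /dev/md0
image = /boot/vmlinuz
  label = Linux
  read-only

followed by a plain "lilo -v" to write the boot loader onto the array ("raid-extra-boot = mbr-only" puts a boot record on each member disk's MBR).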

