Degraded raid1 array problem on slack 12.0.0 / mdadm
Lilo - LILO version 22.8
mdadm - v2.6.1 - 22nd February 2007
uname - Linux 2.6.21.5-smp #2 (stock Slackware 12.0 install kernel).
I'm trying to create three raid1 devices: primary disk /dev/sda, secondary disk /dev/sdd. Slack was installed on /dev/sda a while back; I'm now trying to add RAID post-install. The two disks are identical 74GB Raptors. Setup as follows:
Array       Primary Partition   Secondary Partition   Mount Point
/dev/md0    sda1                sdd1                  /
/dev/md1    sda2                sdd2                  /var
/dev/md2    sda3                sdd3                  swap
Having created a degraded raid1 array on /dev/sdd, copied the filesystem across, and altered lilo.conf and /etc/fstab to boot/mount from the md0 array (everything okay so far), I cannot add the original partition (sda1) to the md0 array. I've spent six hours on this single problem today; any help will be much appreciated.
[host]$: cat /proc/mdstat
Personalities : (removed for legibility)
md1 : active raid1 sdd2 sda2
14651200 blocks [2/2] [UU]
md2 : active raid1 sdd3 sda3
1951808 blocks [2/2] [UU]
md0 : active raid1 sdd1
56002432 blocks [2/1] [_U]
unused devices: <none>
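For anyone scanning similar output: a degraded member shows up in /proc/mdstat as an underscore inside the status brackets, so [_U] means slot 0 is missing while [UU] is healthy. A minimal sketch of spotting degraded arrays (sample mdstat text from this thread is embedded here so the command is self-contained; on the live box you'd pipe /proc/mdstat itself through the same grep):

```shell
# Flag any array whose status brackets contain "_" (a missing member),
# e.g. [_U] or [U_]; healthy arrays like [UU] are not matched.
cat <<'EOF' | grep -E '\[[U_]*_[U_]*\]'
md1 : active raid1 sdd2 sda2
      14651200 blocks [2/2] [UU]
md0 : active raid1 sdd1
      56002432 blocks [2/1] [_U]
EOF
```

Only the [_U] line from md0 comes back, which matches the [2/1] degraded count above.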
[host]$: mdadm /dev/md0 -a /dev/sda1
mdadm: Cannot open /dev/sda1: Device or resource busy
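For a "Device or resource busy" on a hot-add, the usual first step is to find out what is actually holding the partition. A hedged sketch of the read-only checks (run as root on the live box; nothing here modifies the disk, and fuser may need the psmisc package):

```shell
# Is sda1 still mounted anywhere? /proc/mounts shows what the kernel
# really has mounted, which can differ from /etc/mtab.
grep sda1 /proc/mounts

# Is it already claimed by another md array?
grep sda1 /proc/mdstat

# Does any process hold the device node open directly?
fuser -v /dev/sda1

# Is a partition still listed as active swap? (relevant to the sda3/md2 pair)
cat /proc/swaps
```

If all four come back empty, the hold is usually the kernel itself still rooting from the partition rather than a userspace process.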
[host]$: mount
/dev/md/0 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/md/1 on /var type ext3 (rw)
...Not sure, but could these be running on the old root partition still, sda1?
[host]$: grep sda1 /var/log/dmesg
sda: sda1 sda2 sda3
md: invalid raid superblock magic on sda1
md: sda1 has invalid sb, not importing!
EXT3 FS on sda1, internal journal
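Those dmesg lines are worth reading closely. "invalid raid superblock magic on sda1" is expected here (sda1 hasn't been given an md superblock yet), but "EXT3 FS on sda1, internal journal" suggests the kernel mounted the filesystem straight off the raw partition at boot - i.e. the box may still genuinely be rooted on sda1, whatever mount reports. One read-only way to check:

```shell
# If this prints root=/dev/sda1 (or the numeric form, e.g. root=801),
# lilo is still booting the raw partition rather than /dev/md0, and the
# kernel's hold on sda1 would explain the "Device or resource busy".
cat /proc/cmdline
```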
[host]$: mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.
[host]$: mdadm --detail /dev/md0
Version : 00.90.03
Creation Time : Fri Oct 5 17:37:37 2007
Raid Level : raid1
Array Size : 56002432 (53.41 GiB 57.35 GB)
Used Dev Size : 56002432 (53.41 GiB 57.35 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Oct 5 18:02:59 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
I should really add that this is my first attempt at setting up raid1 under Linux, and on a live system at that.
Thus, the root cause is most likely a very simple step that will hopefully be apparent to an experienced raid1 user. If you're reading this and thinking "oh, he must have done this...", don't count on it!
Also, in case my question has become obscured by the level of detail in the post, it is: "How can I fix whatever is stopping me adding partition /dev/sda1 to raid1 array /dev/md0?"
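Assuming nothing is left holding sda1 (in particular, that the system is truly booted and rooted from /dev/md0), the usual hot-add sequence is roughly the following. This is a sketch, not something tested on this box, and the first command is destructive - it wipes whatever metadata remains on sda1:

```shell
# DANGER: destroys the old filesystem/md metadata on sda1. Only run this
# once you are certain / is mounted from /dev/md0, not /dev/sda1.
mdadm --zero-superblock /dev/sda1

# Hot-add the partition; the array then rebuilds onto it.
mdadm /dev/md0 -a /dev/sda1

# Watch the resync progress.
watch cat /proc/mdstat
</imports>
```

Once /proc/mdstat shows [2/2] [UU] for md0, the mirror is complete.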
I've recently used Knoppix to get in and confirm the two drives match, in terms of /etc/lilo.conf and /etc/fstab.
The server is now located in a data centre, and physical access is a nightmare to arrange; it's also a fully configured and operational production server. Reinstalling the OS and setting up RAID at this point is an option, but one I'd rather not pursue - I'm 400 miles from the data centre presently, and I don't think this justifies a complete reblast of the OS.
I see. Well the reason I suggested it is because that's how I did it when I set up raid 1 on a server I have. And given it's complaining about the device being in use, it seems to be what you need to do. If you can boot off knoppix again, then you should be able to do it from there too once you get a terminal. I'm pretty sure knoppix would have mdadm, and if not you can always put it on there. You definitely don't need to reinstall the OS though.
I thought you would do it the other way around, though - add the running disk first and then the other. But I suppose if you have the right disk marked as not-in-sync (sorry, can't remember the term used) it would work. If only I could find that Slackware-on-RAID guide I used...
The problem seemed to be with booting and rooting to a raid array.
Decided the quickest fix was to visit the data centre and create the RAID during a reinstall of the OS.
In hindsight, I think the problem was Lilo-related, and a reinstall of Lilo to the MBR on the array would have fixed it. The main problem was that lilo -v wouldn't update to boot from the raid; a Lilo reinstall may have been better than a full OS reblast - but the opportunity for a bit of (late) spring cleaning is always nice.
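For anyone hitting the same wall: rooting a Slackware box on md with LILO generally needs both root= and boot= pointed at the array, plus raid-extra-boot so a boot sector lands on every member disk. A hedged lilo.conf sketch (boot, root and raid-extra-boot are real LILO 22.x directives, but the kernel path and label here are made up - adapt to your setup):

```
# /etc/lilo.conf fragment - sketch only
boot = /dev/md0               # install the boot loader via the array
raid-extra-boot = mbr-only    # also write the MBR of each raid member disk

image = /boot/vmlinuz         # hypothetical kernel path
  root = /dev/md0             # root on the array, not /dev/sda1
  label = linux
  read-only
```

After editing, rerun lilo -v and check that it reports writing boot records for both member disks.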