Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
mdadm: /dev/md/0 assembled from 2 drives - not enough to start the array.
The server is Ubuntu 8.10 Server, with one HDD for the OS and four 250 GB HDDs for the RAID5 array; the box runs strictly as a server (no other OS on it). It's been up and working since December, built with all brand-new WD drives. Suddenly it stopped and wouldn't let me rename a folder (from my WinXP box). So I shut down the server and rebooted it, and now the array won't assemble.
Any ideas how to get it back, or at least get the data online long enough to make a backup? I have a blank 1 TB drive installed in my WinXP box, ready to copy the data onto.
Looks like 2 of the drives in your RAID5 aren't present. You could try booting off a CD and seeing if all drives are actually detected. You didn't mention the drive technology... are some/all of them SCSI? Could you please give us the disk connection layout?
Thanks for the response. I've just rebooted, checked the BIOS, and all 5 drives show as present. All 5 drives are SATA2; the motherboard is an ASUS with 5 SATA slots. The drives are configured in the BIOS as plain SATA, even though it can also configure fake-RAID. The RAID is fully managed by the OS. All 4 of the RAID5 drives are model # WD2500AAKS.
What CD should I use to boot from? I can definitely build a bootable cd, if I know what I need to have on it.
If they're all detected, try running 'mdadm -Af /dev/md0'; this will (f)orce-(A)ssemble the array, depending on the cause of the original failure. I think it's unlikely that you had 2 simultaneous disk failures, so hopefully you'll get your data back... (just had a thought: I hope one hadn't quietly failed some time back..) good luck
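Before forcing anything, it's worth comparing the event counters in each member's superblock; members whose counts lag far behind are the ones md kicked out. A sketch (the partition names are assumptions based on a typical 4-member layout; run as root on the affected box):

```shell
# Print each RAID member's superblock; compare the Events and State fields
for p in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1; do
    echo "== $p =="
    mdadm --examine "$p" | grep -E 'Events|State'
done
```

Members within a few events of each other can usually be force-assembled safely; a member thousands of events behind probably dropped out of the array some time ago.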
root@RCH-SERVER:/etc/mdadm# mdadm -Af /dev/md0
mdadm: /dev/md0 not identified in config file.
My mdadm.conf file shows:
Quote:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# This file was auto-generated on Sun, 11 Jan 2009 19:39:47 -0600
At first blush, your mdadm.conf seems to be missing its ARRAY definition... try replacing the file contents with the following (note the added ARRAY line):
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/0 level=raid5 metadata=1.0 num-devices=4 UUID=13782a18:85c82f51:e999ccd0:c2ca0614 name=0
# This file was auto-generated on Sun, 11 Jan 2009 19:39:47 -0600
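Rather than typing the UUID by hand, the ARRAY line can usually be regenerated from the on-disk superblocks; a sketch (run as root; the config path is the Debian/Ubuntu default and is an assumption here):

```shell
# Print ARRAY lines for every array whose member superblocks are visible
mdadm --examine --scan
# If the output looks right, append it to the config and retry assembly
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm --assemble --scan
```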
Custangro, I changed it, but there's no apparent change:
Quote:
root@RCH-SERVER:/home/admiral# mdadm --assemble --scan
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.
Is there a place with good instructions on building a live CD from which I can boot the system? I HAVE a 2nd 80 GB OS HDD that I built in December; I had the system set up so I could swap the OS drives out and not lose anything (except reboot time). I tried that, with no change in the results. Would a live CD be the same thing, or would it be different?
I can also build one using OpenSuSE and a GUI that would let me reassemble the RAID5 array without losing data. That's how I started this a year ago: building the system with OpenSuSE, then getting dissatisfied with its workings and going to Ubuntu Server 8.10, but keeping the same RAID5 array created under OpenSuSE. I could go back and start that over, if that will help.
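If --assemble --scan keeps stopping at 2 drives, the usual next step is to name the member partitions explicitly and force the assembly. A sketch (device names taken from the fdisk listing in this thread; run as root, and note that --force rewrites superblock event counts, so image the drives first if the data is irreplaceable):

```shell
# Tear down the half-assembled array, then force-assemble from all four members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1
# Check whether md started it (possibly in degraded mode)
cat /proc/mdstat
```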
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x41413535
Device Boot Start End Blocks Id System
/dev/sda1 1 30401 244196001 fd Linux raid autodetect
Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002d3a2
Device Boot Start End Blocks Id System
/dev/sdb1 1 30401 244196001 fd Linux raid autodetect
Disk /dev/sdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00010f8f
Device Boot Start End Blocks Id System
/dev/sdc1 1 30401 244196001 fd Linux raid autodetect
Disk /dev/sdd: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00055741
Device Boot Start End Blocks Id System
/dev/sdd1 1 9327 74919096 83 Linux
/dev/sdd2 9328 9729 3229065 5 Extended
/dev/sdd5 9328 9729 3229033+ 82 Linux swap / Solaris
Disk /dev/sde: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b3fcd
Device Boot Start End Blocks Id System
/dev/sde1 1 30401 244196001 fd Linux raid autodetect
I HAD a RAID5 array when I started renaming a few photograph folders. It's ACTING like 2 of the 4 disks have failed, but I can't find any physical evidence thereof, and I can't imagine that two HDDs, less than 9 months old, would have failed simultaneously - after all, this is a home server, not an enterprise machine. I could be wrong about the 2 (not) failing simultaneously, but it just looks like something marked them as bad (or busy) so that they won't load.
Thanks,
David
I'm not _too_ familiar with mdadm...so this is a long shot...try replacing the "ARRAY" line with something like this (reboot after making changes)...
I downloaded and installed smartmontools and ran diagnostics. It consistently shows that the 3 drives /dev/sda, /dev/sdb, and /dev/sde are fine (/dev/sdd is the OS drive, so that one doesn't count), but that /dev/sdc has a consistent read error.
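For anyone following along, those checks can be scripted with smartctl (part of smartmontools; device names as in this thread; run as root):

```shell
# Overall SMART health verdict for each array member
for d in /dev/sda /dev/sdb /dev/sdc /dev/sde; do
    smartctl -H "$d"
done
# Full SMART attributes and error log for the suspect drive
smartctl -a /dev/sdc
# Optionally start a long offline self-test (can take an hour or more)
smartctl -t long /dev/sdc
```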
So I ordered a new drive, the exact same model # as the original 4, and it'll be in tomorrow. After I get off work tomorrow, I'll remove the old drive, fdisk the new one to match one of the others, and then add the new one to the array, and it SHOULD have my data back and operational before bedtime.
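A sketch of that replacement, assuming the array has first been brought up degraded on the 3 good members and the new drive appears at /dev/sdc (sfdisk copies the partition table; all device names here are assumptions):

```shell
# Clone the partition layout from a healthy member onto the new drive
sfdisk -d /dev/sda | sfdisk /dev/sdc
# Add the fresh partition; md begins rebuilding parity onto it
mdadm /dev/md0 --add /dev/sdc1
# Watch the resync progress
cat /proc/mdstat
```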