LinuxQuestions.org
Old 08-12-2010, 10:30 AM   #1
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Rep: Reputation: 23
raid disk not running


Hello,


I just configured two raid arrays, but after a reboot they are not mounted and seem to be inactive.

md127 = sde1, sdf1 and sdi1 (raid 5)
md0 = sda1 and sdh1 (raid 0)
Code:
[root@server /]# cat /proc/mdstat
Personalities : 
md127 : inactive sdf1[1](S) sde1[2](S)
      78156032 blocks
       
md0 : inactive sda1[0](S)
      488382977 blocks super 1.2
       
unused devices: <none>
Code:
[root@server /]# fdisk -l | grep "Disk /"
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
Disk /dev/sdc: 122.9 GB, 122942324736 bytes
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
Disk /dev/sde: 40.0 GB, 40020664320 bytes
Disk /dev/sdf: 40.0 GB, 40020664320 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 251.0 GB, 251000193024 bytes
Disk /dev/sdi: 40.0 GB, 40020664320 bytes
Disk /dev/sdj: 500.1 GB, 500107862016 bytes
Code:
[root@server /]# cat /etc/mdadm.conf 
DEVICE /dev/sdi1 /dev/sdf1 /dev/sde1 /dev/sda1 /dev/sdh1
ARRAY /dev/md127 UUID=5dc0cf7a:8c715104:04894333:532a878b auto=yes
ARRAY /dev/md0 UUID=65c49170:733df717:435e470b:3334ee94 auto=yes
As you can see, they now show up as inactive, and for some reason sdi1 and sdh1 are not even listed. What can I do to get them back?

To make matters worse, I placed some important data on them, and even though I was clever enough to keep an extra copy on another drive, guess which drive that was? So I need to get them activated as-is (at least long enough to get the data off them) before I can rebuild them from scratch.

Any assistance is much appreciated!

PS. I'm running Mandriva 2010.1 and created them using the built-in disk partitioner.
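For readers hitting the same state: the "(S)" flags in the mdstat output above mark devices the kernel is holding as spares of an array it never started. A minimal sketch for listing each inactive array and its captured members (it runs against a saved copy of the output above; on a live system you would point it at the real /proc/mdstat):

```shell
# Save a copy of the mdstat output from this post as sample input.
cat <<'EOF' > /tmp/mdstat.sample
Personalities :
md127 : inactive sdf1[1](S) sde1[2](S)
      78156032 blocks

md0 : inactive sda1[0](S)
      488382977 blocks super 1.2

unused devices: <none>
EOF

# Print each inactive array followed by the members it grabbed.
awk '/ : inactive / {
    printf "%s:", $1            # array name, e.g. md127
    for (i = 4; i <= NF; i++)   # fields 4..NF are the member devices
        printf " %s", $i
    print ""
}' /tmp/mdstat.sample
```

This prints one line per stuck array, e.g. `md127: sdf1[1](S) sde1[2](S)`, which makes it easy to see which members never joined.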
 
Old 08-13-2010, 12:54 PM   #2
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
No doubt a dumb question, but have you tried assembling and mounting them manually at the command line with mdadm? Something like mdadm --assemble --scan, or even mdadm --assemble {manually add raid parameters and devices}.

If not, you could try posting the output of

mdadm -Ebsc partitions

and we can try to figure it out.
 
Old 08-14-2010, 12:07 PM   #3
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
Hello again,

After your suggestion I dared to run some mdadm commands and got it working. I had to run mdadm --stop first before assembling them with --assemble -v.

The only remaining problem is that I have to do this manually each time the computer is restarted. Is there any way to have them assembled automatically?
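For anyone following along later, the working sequence described above looks roughly like this (a sketch reconstructed from this post, with device names taken from the first post; not a verbatim transcript):

```
# stop the half-assembled, inactive arrays first
mdadm --stop /dev/md127
mdadm --stop /dev/md0

# then reassemble verbosely, naming every member
mdadm --assemble -v /dev/md127 /dev/sde1 /dev/sdf1 /dev/sdi1
mdadm --assemble -v /dev/md0 /dev/sda1 /dev/sdh1
```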
 
Old 08-14-2010, 12:54 PM   #4
jay73
LQ Guru
 
Registered: Nov 2006
Location: Belgium
Distribution: Ubuntu 11.04, Debian testing
Posts: 5,019

Rep: Reputation: 133
You may need to edit your mdadm.conf file. Something like this:

Quote:
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=b42490f3:ada38e64:0c1c9294:00a2349e
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=3d59e000:18465b7f:052fff41:14d859df
 
Old 08-14-2010, 01:09 PM   #5
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
My mdadm.conf says:
Code:
DEVICE /dev/sdi1 /dev/sdf1 /dev/sde1 /dev/sda1 /dev/sdh1
ARRAY /dev/md127 UUID=[numbers] auto=yes
ARRAY /dev/md0 UUID=[numbers] auto=yes
But it still doesn't get assembled during startup; I am forced to run --assemble manually.
 
Old 08-14-2010, 01:12 PM   #6
jay73
LQ Guru
 
Registered: Nov 2006
Location: Belgium
Distribution: Ubuntu 11.04, Debian testing
Posts: 5,019

Rep: Reputation: 133
From the mdadm.conf man page:

Quote:
auto= This option declares to mdadm that it should try to create the device file of the array if it doesn't already exist, or exists but with the wrong device number.
So it only says "try"; there is no guarantee that it will succeed. As I suggested, try being more explicit by listing the details (number of devices, spares, raid level). See man mdadm.conf.
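Concretely, being more explicit might look like this (a sketch: the UUIDs are the ones from the first post, and level/num-devices are inferred from the arrays described there):

```
DEVICE /dev/sda1 /dev/sde1 /dev/sdf1 /dev/sdh1 /dev/sdi1
ARRAY /dev/md127 level=raid5 num-devices=3 UUID=5dc0cf7a:8c715104:04894333:532a878b
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=65c49170:733df717:435e470b:3334ee94
```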
 
Old 08-14-2010, 01:25 PM   #7
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
Quote:
Originally Posted by jay73 View Post
So it says "try", no guarantee that it will succeed. As I suggested, try being more explicit by listing the details (number of devices, spares, raid level). See man mdadm.conf.
Thanks, I'll give it a go.
 
Old 08-14-2010, 02:25 PM   #8
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
My mdadm.conf now says:
Code:
DEVICE /dev/sdi1 /dev/sdf1 /dev/sde1 /dev/sda1 /dev/sdh1                                                  
ARRAY /dev/md127 num-devices=3 devices=/dev/sde1,/dev/sdf1,/dev/sdi1 level=5 UUID=5dc0cf7a:8c715104:04894333:532a878b auto=yes
ARRAY /dev/md0 num-devices=2 devices=/dev/sda1,/dev/sdh1 level=0 UUID=65c49170:733df717:435e470b:3334ee94 auto=yes
But it still doesn't work on startup.

mdadm --assemble -scan says it only finds one drive for each set.
Code:
[root@hserver /]# mdadm --assemble -scan
mdadm: /dev/md/127_0 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/holisrv3:0 assembled from 1 drive - not enough to start the array.
mdadm: No arrays found in config file or automatically
It only finds one device per array, and it claims there are no arrays in the config file even though there are. And what is with the strange md names?
(Unless /etc/mdadm.conf is the wrong config file?)

It feels like there are some conflicting configurations somewhere.
 
Old 08-14-2010, 02:28 PM   #9
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
Adding additional output as requested previously.

Code:
[root@server /]# mdadm -Ebsc partitions
ARRAY /dev/md/0 metadata=1.2 UUID=65c49170:733df717:435e470b:3334ee94 name=holisrv3:0
ARRAY /dev/md127 UUID=5dc0cf7a:8c715104:04894333:532a878b
This sure looks like another config file? But...
Code:
[root@server /]# locate mdadm.conf
/etc/mdadm.conf
/usr/share/doc/mdadm/mdadm.conf-example
/usr/share/man/man5/mdadm.conf.5.lzma

Btw. "holisrv3" is the name of the computer.

Last edited by MartenH; 08-14-2010 at 02:35 PM. Reason: Added more details
 
Old 08-14-2010, 02:44 PM   #10
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
I noticed my error of stating level=5 instead of level=raid5 and have corrected it. I have not rebooted yet.

Perhaps I should give up on salvaging the current situation, simply erase all traces of the raid sets, and redo them from scratch, manually this time instead of using the guide?

What say you?

(Since I could assemble them manually, I have moved all my important data off them.)
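If it does come to recreating them from scratch, the manual commands would be roughly as follows (a sketch only, and destructive: --create rewrites the superblocks, so run it only once the data is safe elsewhere; device names and raid levels are taken from the first post):

```
# raid5 across three members, raid0 across two
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde1 /dev/sdf1 /dev/sdi1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda1 /dev/sdh1
```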
 
Old 08-14-2010, 03:13 PM   #11
jay73
LQ Guru
 
Registered: Nov 2006
Location: Belgium
Distribution: Ubuntu 11.04, Debian testing
Posts: 5,019

Rep: Reputation: 133
If you create them manually, you are also responsible for adding entries to the mdadm.conf file, so it would be useful to check whether what you have now works, if only to give you an idea. Re-creating the raid devices without knowing exactly what the entries should look like would be pointless.
 
Old 08-14-2010, 03:39 PM   #12
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
I have now rebooted after correcting the level=5 error. Still the same result.
 
Old 08-14-2010, 04:14 PM   #13
jay73
LQ Guru
 
Registered: Nov 2006
Location: Belgium
Distribution: Ubuntu 11.04, Debian testing
Posts: 5,019

Rep: Reputation: 133
Run mdadm --detail --scan and compare the output with what you put into mdadm.conf.
 
Old 08-14-2010, 07:13 PM   #14
MartenH
Member
 
Registered: Jul 2005
Location: Lund, Sweden
Distribution: Debian
Posts: 78

Original Poster
Rep: Reputation: 23
Ok.

I removed all sets and rebuilt everything from scratch using this guide. Still no luck.

Current details:
fdisk -l | grep "Disk /" (The disks are there)
Quote:
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
Disk /dev/sdc: 122.9 GB, 122942324736 bytes
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
Disk /dev/sde: 40.0 GB, 40020664320 bytes
Disk /dev/sdf: 40.0 GB, 40020664320 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 251.0 GB, 251000193024 bytes
Disk /dev/sdi: 40.0 GB, 40020664320 bytes
Disk /dev/sdj: 500.1 GB, 500107862016 bytes
cat /proc/mdstat (why are sdi1 and sdh1 missing?)
Quote:
Personalities :
md0 : inactive sdf1[1](S) sde1[0](S)
78154114 blocks super 1.2

md1 : inactive sda1[0](S)
488382977 blocks super 1.2

unused devices: <none>
/etc/mdadm.conf (Should I add a DEVICE line?)
Quote:
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=holisrv2:0 UUID=935f6f43:780a81ff:e0a5f47f:9a46d24d
   devices=/dev/sde1,/dev/sdf1,/dev/sdi1
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=holisrv2:1 UUID=c381c719:983bc278:a033decc:30f6d7bd
   devices=/dev/sda1,/dev/sdh1
mdadm --detail --scan
Quote:
mdadm: md device /dev/md1 does not appear to be active.
mdadm: md device /dev/md0 does not appear to be active.
mdadm --assemble --scan
Quote:
mdadm: /dev/md0 is already in use.
mdadm: /dev/md1 is already in use.
mdadm -Ebsc partitions
Quote:
ARRAY /dev/md/1 metadata=1.2 UUID=c381c719:983bc278:a033decc:30f6d7bd name=holisrv2:1
ARRAY /dev/md/0 metadata=1.2 UUID=935f6f43:780a81ff:e0a5f47f:9a46d24d name=holisrv2:0
I then run:
Code:
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# mdadm --stop /dev/md1
mdadm: stopped /dev/md1

# mdadm --assemble -v /dev/md0 /dev/sde1 /dev/sdf1 /dev/sdi1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdi1 is identified as a member of /dev/md0, slot 2.
mdadm: added /dev/sdf1 to /dev/md0 as 1
mdadm: added /dev/sdi1 to /dev/md0 as 2
mdadm: added /dev/sde1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 3 drives.

# mdadm --assemble -v /dev/md1 /dev/sda1 /dev/sdh1
mdadm: looking for devices for /dev/md1
mdadm: /dev/sda1 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdh1 is identified as a member of /dev/md1, slot 1.
mdadm: added /dev/sdh1 to /dev/md1 as 1
mdadm: added /dev/sda1 to /dev/md1 as 0
mdadm: /dev/md1 has been started with 2 drives.
At this point I can mount them and use them if I want to.

After assembling manually I get the following output:

cat /proc/mdstat
Quote:
Personalities : [raid6] [raid5] [raid4] [raid0]
md1 : active raid0 sda1[0] sdh1[1]
733493248 blocks super 1.2 512k chunks

md0 : active raid5 sde1[0] sdi1[3] sdf1[1]
78153728 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
mdadm --detail --scan
Quote:
ARRAY /dev/md0 metadata=1.2 name=holisrv2:0 UUID=935f6f43:780a81ff:e0a5f47f:9a46d24d
ARRAY /dev/md1 metadata=1.2 name=holisrv2:1 UUID=c381c719:983bc278:a033decc:30f6d7bd
mdadm -Ebsc partitions
Quote:
ARRAY /dev/md/1 metadata=1.2 UUID=c381c719:983bc278:a033decc:30f6d7bd name=holisrv2:1
ARRAY /dev/md/0 metadata=1.2 UUID=935f6f43:780a81ff:e0a5f47f:9a46d24d name=holisrv2:0
So the question remains: why do they refuse to assemble on startup? I can obviously assemble them manually without any problem.

PS. I noticed that the mdadm service is not started automatically, but starting it with 'service mdadm start' made no difference.
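One more thing worth trying, offered as an assumption rather than a verified Mandriva fix: with v1.2 superblocks it is common to let mdadm probe every partition via a DEVICE partitions line, and to take the ARRAY lines verbatim from mdadm --detail --scan. A sketch that builds such a candidate file from the scan output above:

```shell
# Build a candidate mdadm.conf from `mdadm --detail --scan` output.
# The scan output here is pasted from the post above; on a live
# system you would use:  scan=$(mdadm --detail --scan)
scan='ARRAY /dev/md0 metadata=1.2 name=holisrv2:0 UUID=935f6f43:780a81ff:e0a5f47f:9a46d24d
ARRAY /dev/md1 metadata=1.2 name=holisrv2:1 UUID=c381c719:983bc278:a033decc:30f6d7bd'

{
  echo 'DEVICE partitions'   # let mdadm probe all partitions for superblocks
  printf '%s\n' "$scan"
} > /tmp/mdadm.conf.candidate

cat /tmp/mdadm.conf.candidate
```

Review the candidate, then copy it over /etc/mdadm.conf and reboot to test.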
 
Old 08-14-2010, 11:59 PM   #15
jay73
LQ Guru
 
Registered: Nov 2006
Location: Belgium
Distribution: Ubuntu 11.04, Debian testing
Posts: 5,019

Rep: Reputation: 133
I have used that same how-to without any problem, so I am beginning to wonder whether you are affected by a bug. Maybe it is time for a little detour; I would check with another distro.
 
  

