Old 01-02-2012, 10:50 PM   #1
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Rep: Reputation: 0
software raid1 array fails to activate on boot


Running Ubuntu. I have a RAID 1 array that shows as inactive after a reboot. sudo mdadm -R /dev/md0 restarts the array with one failed disk; I recover the second disk and all is fine until I reboot again, when I get a message that array md0 is not active and am asked whether I want to mount manually or skip. Any suggestions on how to make the array active at boot?

Code:
jess@NAS:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sde1[1] sdd1[0]
      976759936 blocks [2/2] [UU]

md0 : inactive sdc[1](S) sdb1[0](S)
      3907026432 blocks

unused devices: <none>
jess@NAS:~$ sudo mdadm -R /dev/md0
[sudo] password for jess:
mdadm: started /dev/md0
jess@NAS:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sde1[1] sdd1[0]
      976759936 blocks [2/2] [UU]

md0 : active raid1 sdb1[0]
      1953511936 blocks [2/1] [U_]

unused devices: <none>

 cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sde1[1] sdd1[0]
      976759936 blocks [2/2] [UU]

md0 : active raid1 sdc1[2] sdb1[0]
      1953511936 blocks [2/1] [U_]
      [>....................]  recovery =  0.0% (403840/1953511936) finish=725.4min speed=44871K/sec

unused devices: <none>
 
Old 01-03-2012, 04:45 PM   #2
bernardofpc
LQ Newbie
 
Registered: Aug 2006
Posts: 10

Rep: Reputation: 1
I'm assuming /dev/md0 and /dev/md1 are all your RAIDs, and that neither of them is your root (/) partition.

One thing I can imagine is that your computer was rebooted before it finished syncing the two partitions (that's the ~12 hours it's showing you). If a RAID is stopped before it syncs completely, it (usually) starts over from the beginning to avoid data corruption. So I'd suggest plugging the computer into a reliable power source (a UPS is best) in the morning, running your mdadm -R, and checking the result at night.
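
To keep an eye on the resync while it runs, something like this should do (a minimal sketch; watch ships with stock Ubuntu):
Code:
# redisplay the resync progress every 10 seconds
watch -n 10 cat /proc/mdstat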

If it has already synced and still refuses to come up as you expect: check whether the partitions sdb1 and sdc1 have the "Linux raid autodetect" type (with fdisk -l /dev/sdb /dev/sdc or any other partitioning utility you have). If they are already "autodetect", then there may be a bug in your /etc/mdadm.conf file; could you paste it here?
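
For reference, a quick way to check just the type fields (a sketch; the grep is only there to filter the output):
Code:
# each RAID member should show Id "fd", System "Linux raid autodetect"
sudo fdisk -l /dev/sdb /dev/sdc | grep -i raid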

Good luck in the magic realm of RAIDs,
 
Old 01-03-2012, 06:34 PM   #3
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Original Poster
Rep: Reputation: 0
yeah, it syncs up just fine. here's the latest.
Code:
jess@NAS:/mnt/storage$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sde1[1] sdd1[0]
      976759936 blocks [2/2] [UU]

md0 : active raid1 sdc1[1] sdb1[0]
      1953511936 blocks [2/2] [UU]

unused devices: <none>
jess@NAS:/mnt/storage$
As for the formatting, I know it's right because it was working fine until I accidentally unplugged two cables while installing an additional hdd.
Code:
jess@NAS:/mnt/storage$ sudo fdisk -l /dev/sdb /dev/sdc
[sudo] password for jess:

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa76e8de3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa99677fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect
jess@NAS:/mnt/storage$
Interesting: I have been checking my mdadm.conf religiously, and it seems that every time I re-add sdc1 to the array, it also gets appended to the DEVICE line in mdadm.conf. I have removed those duplicate entries and restarted mdadm.
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sdc1 #(removed these two)

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR jess@daisychick.com
MAILFROM nas-server - mdadm

# definitions of existing MD arrays
# This file was auto-generated on Tue, 25 Jan 2011 04:21:13 -0600
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=306e2114:444e0ff2:cced5de7:ca715931
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c4989226:40ca5381:cced5de7:ca715931
Thanks in advance for the help!
 
Old 01-04-2012, 04:03 PM   #4
bernardofpc
LQ Newbie
 
Registered: Aug 2006
Posts: 10

Rep: Reputation: 1
Quote:
Originally Posted by daisychick View Post
yeah, it syncs up just fine. here's the latest.
Code:
jess@NAS:/mnt/storage$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sde1[1] sdd1[0]
      976759936 blocks [2/2] [UU]

md0 : active raid1 sdc1[1] sdb1[0]
      1953511936 blocks [2/2] [UU]

unused devices: <none>
jess@NAS:/mnt/storage$
Ok, so it's not that part...

Quote:
Originally Posted by daisychick View Post
as for the formatting, I know it's right because it's been working fine up until I accidentally unplugged two cables while installing an additional hdd.
Strange, but there must lie the answer.

Quote:
Originally Posted by daisychick View Post
interesting, I have been checking my mdadm.conf religiously and it seems every time I add sdc1 back it adds it to mdadm.conf.
Very strange, I have never seen that behavior.

Quote:
Originally Posted by daisychick View Post
I have removed those and restarted mdadm.
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sdc1 #(removed these two)

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR jess@daisychick.com
MAILFROM nas-server - mdadm

# definitions of existing MD arrays
# This file was auto-generated on Tue, 25 Jan 2011 04:21:13 -0600
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=306e2114:444e0ff2:cced5de7:ca715931
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c4989226:40ca5381:cced5de7:ca715931
Some suggestions here (without really knowing):
- You could remove the word "partitions" from the DEVICE line. It's redundant and adds extra scanning: if you already list every valid partition explicitly, there's no need for a catch-all like that. Less info, fewer possible bugs.
- I would also pin the member devices to each array, with something like devices=/dev/sdb1,/dev/sdc1 on one ARRAY line and devices=/dev/sdd1,/dev/sde1 on the other (see the sketch below).
- Since you said this started when you switched cables, you could also look at the output of mdadm --detail --scan and see whether the UUIDs match...
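
Roughly what I mean for the conf (a sketch only, reusing the UUIDs from your paste; double-check the member names against /proc/mdstat before trusting it):
Code:
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=306e2114:444e0ff2:cced5de7:ca715931 devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c4989226:40ca5381:cced5de7:ca715931 devices=/dev/sdd1,/dev/sde1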

Cheers,
 
Old 01-04-2012, 04:46 PM   #5
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Original Poster
Rep: Reputation: 0
The UUIDs match.

Code:
jess@NAS:~$ sudo mdadm --detail --scan
ARRAY /dev/md0 metadata=0.90 UUID=306e2114:444e0ff2:cced5de7:ca715931
ARRAY /dev/md1 metadata=0.90 UUID=c4989226:40ca5381:cced5de7:ca715931
jess@NAS:~$
 
Old 01-04-2012, 06:32 PM   #6
Gomer_X
LQ Newbie
 
Registered: Jan 2012
Location: Ohio
Distribution: Debian, CentOS, Fedora, LFS
Posts: 24

Rep: Reputation: Disabled
Quote:
Originally Posted by daisychick View Post
interesting, I have been checking my mdadm.conf religiously and it seems every time I add sdc1 back it adds it to mdadm.conf. I have removed those and restarted mdadm.
Have you tried leaving mdadm.conf alone and then rebooting? Those partitions are listed in mdadm.conf so they'll be scanned and assembled automatically at boot. I'm not sure why you'd remove them from the list.
 
Old 01-05-2012, 04:29 AM   #7
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Original Poster
Rep: Reputation: 0
It lists the same partition three times. After I reboot and the array doesn't come up active, I activate it and then have to add that partition back in. mdadm.conf starts off with the partition listed once, but after I re-add it, it shows up twice. I think it's probably because of how I'm adding it back in, but I'm not sure. Just to verify, what's the mdadm command to add a partition to an array?
 
Old 01-06-2012, 02:41 AM   #8
bernardofpc
LQ Newbie
 
Registered: Aug 2006
Posts: 10

Rep: Reputation: 1
Quote:
Originally Posted by daisychick View Post
Just to verify, what's the mdadm command to add a partition to an array?
It depends on when you add it. If the device shows as failed (not your case), then --add, or --re-add (safer). If you are swapping in a replacement disk, --add. And at creation time you need none of this: the devices go on the command line.
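
For concreteness, roughly (a sketch; substitute your real array and partition names):
Code:
# re-attach a member that dropped out but still has an intact superblock
sudo mdadm /dev/md0 --re-add /dev/sdc1

# or add a new/replacement partition to the degraded array
sudo mdadm /dev/md0 --add /dev/sdc1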

I don't think adding is the right operation anyway when first creating a RAID1 with only 2 devices. At that first step you don't "add devices to the RAID"; rather, you "create a RAID from 2 devices". Strangely enough, you had those devices working before, so I assume you have (valuable) data on them; otherwise I'd just scrub the RAID and start over with mdadm --create /dev/mdX --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1.

Reading your first post more carefully, I noticed your "sleeping" RAID was in fact showing two spares (the (S) flags) and no active member. So perhaps you started the RAID and added those disks afterwards, and they were marked as spares rather than as the "main" devices. The strange part is that mdadm isn't looking for any other disks to be the main ones, so I really don't know what it means.

If you really do need to recreate the RAID, the situation is going to be a little delicate. You'd have to (see the sketch after this list):
- mark one of /dev/sd[bc]1 as failed in the semi-working RAID, then remove it from the RAID
- create a new RAID1 with a *missing* device upon the removed disk
- copy all data from the old to the new RAID1
- destroy the old RAID1
- hot-add the other disk to the new RAID, wait for sync
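
In commands, very roughly (an untested sketch; it assumes /dev/md2 is a free array name and /mnt/new a scratch mount point, and it must not be run without a backup):
Code:
# 1. fail and remove one member of the old array
sudo mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

# 2. build a new degraded RAID1 on the freed partition
sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdc1

# 3. make a filesystem and copy everything across
sudo mkfs.ext3 /dev/md2
sudo mkdir -p /mnt/new && sudo mount /dev/md2 /mnt/new
sudo rsync -a /mnt/storage/ /mnt/new/

# 4. retire the old array and wipe its remaining member
sudo umount /mnt/storage
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb1

# 5. hot-add that disk to the new array and let it sync
sudo mdadm /dev/md2 --add /dev/sdb1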

Perhaps there's a way to just "fix the metadata" on the existing RAID, promoting the two "spares" to active members without touching the data, but I'm not sure.
 
Old 01-07-2012, 04:05 AM   #9
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Original Poster
Rep: Reputation: 0
wow. yeah. that would suck because I have 1.8 TB of movies and tv shows that I would really hate to lose.
 
Old 01-20-2012, 04:16 PM   #10
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Original Poster
Rep: Reputation: 0
Still having the same problem. Can't find a resolution. Here's the latest:

The RAID 1 array is not activating at boot. I have to manually restart the array, mount it, then add the second drive back, since it shows as failed. It recovers and works fine... until reboot. Then I have to start all over again. The error message is:

Code:
init: ureadahead-other main process (404) terminated with status 4
The disk drive for /mnt/storage is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery
I press S, let it boot, and run this to reactivate it:
Code:
sudo mdadm -R /dev/md0
sudo mount /dev/md0 /mnt/storage
Here's some drive info:
Code:
sudo blkid
/dev/sda1: UUID="a1a91b1f-6d2b-462d-84e6-46e949211979" TYPE="ext2"
/dev/sda5: UUID="rTyUA4-GtXV-EPbg-25AR-ydjT-5Me0-UP2k6E" TYPE="LVM2_member"
/dev/mapper/NAS-root: UUID="92cb1a8c-3068-453b-ac92-1850fe98811c" TYPE="ext4"
/dev/sdb1: UUID="306e2114-444e-0ff2-cced-5de7ca715931" TYPE="linux_raid_member"
/dev/mapper/NAS-swap_1: UUID="c8938871-ee53-4356-afad-62ae666c5de6" TYPE="swap"
/dev/md0: UUID="894c0448-f517-4c86-821d-ebcfab67278a" TYPE="ext3"
/dev/sdc1: UUID="306e2114-444e-0ff2-cced-5de7ca715931" TYPE="linux_raid_member"
/dev/sdd1: UUID="c4989226-40ca-5381-cced-5de7ca715931" TYPE="linux_raid_member"
/dev/sde1: UUID="c4989226-40ca-5381-cced-5de7ca715931" TYPE="linux_raid_member"
/dev/md1: UUID="c3320348-3937-4bb9-920a-8d0a693ddb7d" TYPE="ext4"
fdisk -l
Code:
Disk /dev/sda: 8119 MB, 8119738368 bytes
255 heads, 63 sectors/track, 987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b7d58

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      248832   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              32         988     7677953    5  Extended
Partition 2 does not end on cylinder boundary.
/dev/sda5              32         988     7677952   8e  Linux LVM

Disk /dev/dm-0: 7470 MB, 7470055424 bytes
255 heads, 63 sectors/track, 908 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa99677fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/dm-1: 390 MB, 390070272 bytes
255 heads, 63 sectors/track, 47 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/md0: 2000.4 GB, 2000396222464 bytes
2 heads, 4 sectors/track, 488377984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa76e8de3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x934a5078

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2236a1e8

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/md1: 1000.2 GB, 1000202174464 bytes
2 heads, 4 sectors/track, 244189984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table
/etc/mdadm.conf
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sdb1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR no@spam.com
MAILFROM nas-server - mdadm

# definitions of existing MD arrays
# This file was auto-generated on Tue, 25 Jan 2011 04:21:13 -0600
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=306e2114:444e0ff2:cced5de7:ca715931
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c4989226:40ca5381:cced5de7:ca715931
/etc/fstab
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/mapper/NAS-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=a1a91b1f-6d2b-462d-84e6-46e949211979 /boot           ext2    defaults        0       2
/dev/mapper/NAS-swap_1 none            swap    sw              0       0
#
/dev/scd0               /media/cdrom    udf,iso9660 user,noauto,exec,utf8       0       0
/dev/md0                /mnt/storage    ext3    defaults        0       0
/dev/md1                /mnt/backup     ext4    defaults        0       0
cat /proc/mdstat
Code:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sde1[1] sdd1[0]
      976759936 blocks [2/2] [UU]

md0 : active raid1 sdb1[2] sdc1[0]
      1953511936 blocks [2/1] [U_]
      [>....................]  recovery =  4.3% (84445056/1953511936) finish=682.0min speed=45669K/sec
sudo mdadm --detail --scan
Code:
ARRAY /dev/md0 metadata=0.90 spares=1 UUID=306e2114:444e0ff2:cced5de7:ca715931
ARRAY /dev/md1 metadata=0.90 UUID=c4989226:40ca5381:cced5de7:ca715931
Suggestions?
 
Old 01-24-2012, 03:52 PM   #11
daisychick
Member
 
Registered: Nov 2006
Location: Texas
Distribution: ubuntu 12.04 LTS
Posts: 154

Original Poster
Rep: Reputation: 0
bamf. This is driving me nuts. Anyone have any ideas?
 
  

