Software RAID1 array fails to activate on boot
Running Ubuntu. I have a RAID1 array that shows as inactive after a reboot. sudo mdadm -R /dev/md0 restarts the array with one failed disk; I recover the second disk and everything is fine until I reboot, when I get a message that array md0 is not active and am asked whether to mount manually or skip. Any suggestions on how to make the array active at boot?
Code:
jess@NAS:~$ cat /proc/mdstat
I'm assuming /dev/md0 and /dev/md1 are all your RAIDs, and that none of them is your root (/) partition.
One thing I can imagine is that your computer reboots before it finishes syncing both partitions (that's the 12h it's showing you). If a RAID is stopped before it completely syncs, it (usually) starts over from the beginning to avoid data corruption. So I recommend plugging your computer into a reliable power source (a UPS is best), doing your mdadm -R in the morning, and checking the result at night.
If it has already synced and still refuses to work as you expect: check whether the partitions sdb1 and sdc1 have the "Linux raid autodetect" type (with fdisk -l /dev/sdb /dev/sdc or whatever disk partition utility you have). If they are already "autodetect", then there may be a bug in your /etc/mdadm.conf file; could you paste it?
Good luck in the magic realm of RAIDs,
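If they turn out not to be the autodetect type, changing it would go something like this (just a sketch; double-check the partition number before writing anything):
Code:
sudo fdisk /dev/sdb    # at the fdisk prompt: t (change type), 1 (partition sdb1), fd (Linux raid autodetect), w (write and quit)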
Yeah, it syncs up just fine. Here's the latest.
Code:
jess@NAS:/mnt/storage$ cat /proc/mdstat
Code:
jess@NAS:/mnt/storage$ sudo fdisk -l /dev/sdb /dev/sdc
Code:
# mdadm.conf
- You could remove the word "partitions" from the DEVICE section. It's redundant and adds extra work: if you know all the partitions are valid, there's no need for a catch-all like that. Less info, fewer possible bugs.
- I would also put the devices on each ARRAY line, e.g. devices=/dev/sdc1,/dev/sdb1 for one and devices=/dev/sde1,/dev/sdd1 for the other; see the sketch below.
- Since you said this happened when you switched cables, you could check the output of mdadm --detail --scan and see if the UUIDs match.
Cheers,
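Something like this, roughly (a sketch only, not your real file; fill in your own UUIDs and double-check which partitions belong to which array):
Code:
# mdadm.conf (sketch)
ARRAY /dev/md0 metadata=0.90 UUID=<uuid-of-md0> devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md1 metadata=0.90 UUID=<uuid-of-md1> devices=/dev/sdd1,/dev/sde1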
The UUIDs match.
Code:
jess@NAS:~$ sudo mdadm --detail --scan
ARRAY /dev/md0 metadata=0.90 UUID=306e2114:444e0ff2:cced5de7:ca715931
ARRAY /dev/md1 metadata=0.90 UUID=c4989226:40ca5381:cced5de7:ca715931
jess@NAS:~$
It lists the same partition three times. After I reboot and the array doesn't come up active, I activate it and then have to add that partition back in. The mdadm.conf starts off with just one partition, but after I add it, the partition shows up twice. I think it's probably because of how I'm adding it back in, but I'm not sure. Just to verify, what's the mdadm command to add a partition to an array?
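For reference, this is roughly what I've been running to put it back (from memory, so the device name might be off):
Code:
sudo mdadm /dev/md0 --add /dev/sdc1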
I don't think that's the right thing to do anyway when creating a RAID1 with only 2 devices. At the first step you don't "add devices to the RAID", you "create a RAID from 2 devices". Strangely enough, you had those devices working before, so I assume you have (valuable) data on them; otherwise I'd just scrub the RAID and start over with mdadm --create /dev/mdX --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1.
Reading your first post more carefully, I realized your "sleeping" RAID was in fact showing two spares and no active member. So perhaps you started your RAID and added those disks afterwards, and they were marked as spares rather than the "main" ones. The strange part is that mdadm looks for no other disks for the main part, so I really don't know what that means.
If you really need to recreate the RAID, the situation is going to be a little delicate. You'd have to:
- mark one of /dev/sd[bc]1 as failed in the semi-working RAID, then remove it from the RAID
- create a new RAID1 with a *missing* device on the removed disk
- copy all data from the old RAID1 to the new one
- destroy the old RAID1
- hot-add the other disk to the new RAID and wait for the sync
Roughly, the commands would look like the sketch below. Perhaps there is also a way to just "change the metadata" of the RAIDs, promoting the two "spares" to "main" without touching the data, but I'm not sure.
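Only as an outline: /dev/md2, the /mnt/new mount point, and the ext4/rsync choices below are my assumptions, so adjust everything to your layout and have a backup before touching anything.
Code:
# drop one disk from the old, still-readable array
sudo mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
# build a new RAID1 on that disk, with the second member "missing" for now
sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 missing
sudo mkfs.ext4 /dev/md2
# mount the new array and copy the data across
sudo mkdir -p /mnt/new && sudo mount /dev/md2 /mnt/new
sudo rsync -a /mnt/storage/ /mnt/new/
# once the copy is verified, retire the old array and hand its disk to the new one
sudo umount /mnt/storage
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb1
sudo mdadm /dev/md2 --add /dev/sdb1   # then watch /proc/mdstat for the resync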
Wow. Yeah, that would suck, because I have 1.8 TB of movies and TV shows that I would really hate to lose.
Still having the same problem. Can't find a resolution. Here's the latest:
The RAID1 array is not activating at boot. I have to manually restart the array, mount it, and then add the second drive back, since it shows as failed. It recovers and works fine... until reboot, when I have to start all over again (the commands I run are sketched after the output below). The error message is:
Code:
init: ureadahead-other main process (404) terminated with status 4
Code:
sudo mdadm -R /dev/md0
Code:
sudo blkid
Code:
Disk /dev/sda: 8119 MB, 8119738368 bytes
Code:
# mdadm.conf
Code:
# /etc/fstab: static file system information.
Code:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
Code:
ARRAY /dev/md0 metadata=0.90 spares=1 UUID=306e2114:444e0ff2:cced5de7:ca715931
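For reference, the manual recovery I end up doing after every reboot is roughly this (from memory, so the name of the re-added drive might be off):
Code:
sudo mdadm -R /dev/md0                 # force-start the inactive array
sudo mount /dev/md0 /mnt/storage       # mount it
sudo mdadm /dev/md0 --add /dev/sdc1    # re-add the drive that shows as failed; it then resyncs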
Bamf. This is driving me nuts. Anyone have any ideas?