md: kicking non-fresh sda6 from array!
Hello,
I have some raid1 failures on my computer. How can I fix this?

Code:
# dmesg | grep md
ata1: SATA max UDMA/133 cmd 0xBC00 ctl 0xB882 bmdma 0xB400 irq 193
ata2: SATA max UDMA/133 cmd 0xB800 ctl 0xB482 bmdma 0xB408 irq 193
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: raid1 personality registered as nr 3
md: md2 stopped.
md: bind<sdb9>
md: bind<sda9>
raid1: raid set md2 active with 2 out of 2 mirrors
md: md1 stopped.
md: bind<sda6>
md: bind<sdb6>
md: kicking non-fresh sda6 from array!
md: unbind<sda6>
md: export_rdev(sda6)
raid1: raid set md1 active with 1 out of 2 mirrors
md: md0 stopped.
md: bind<sda5>
md: bind<sdb5>
md: kicking non-fresh sda5 from array!
md: unbind<sda5>
md: export_rdev(sda5)
raid1: raid set md0 active with 1 out of 2 mirrors
EXT3 FS on md2, internal journal
EXT3 FS on md0, internal journal
EXT3 FS on md1, internal journal

Code:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb5[1]
      4883648 blocks [2/1] [_U]
md1 : active raid1 sdb6[1]
      51761280 blocks [2/1] [_U]
md2 : active raid1 sda9[0] sdb9[1]
      102799808 blocks [2/2] [UU]
unused devices: <none>

Code:
# e2fsck /dev/sda5
e2fsck 1.37 (21-Mar-2005)
/usr: clean, 18653/610432 files, 96758/1220912 blocks (check in 3 mounts)
# e2fsck /dev/sda6
e2fsck 1.37 (21-Mar-2005)
/var: clean, 7938/6471680 files, 350458/12940320 blocks (check in 3 mounts)
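For readers hitting this later: you can spot degraded arrays mechanically, because in /proc/mdstat an underscore in the status brackets (e.g. [_U]) marks a missing mirror. A minimal sketch, not part of the original posts; the sample text is the mdstat output quoted above, and on a live system you would read /proc/mdstat directly:

```shell
# Sample /proc/mdstat content copied from the post above; on a real
# system use: mdstat=$(cat /proc/mdstat)
mdstat='Personalities : [raid1]
md0 : active raid1 sdb5[1]
      4883648 blocks [2/1] [_U]
md1 : active raid1 sdb6[1]
      51761280 blocks [2/1] [_U]
md2 : active raid1 sda9[0] sdb9[1]
      102799808 blocks [2/2] [UU]
unused devices: <none>'

# Remember the current mdX name; when a status field like [_U] contains
# an underscore, that array is running with a missing member.
degraded=$(echo "$mdstat" | awk '
  /^md/              { dev = $1 }
  /\[[U_]+\]/ && /_/ { print dev " is degraded" }')
echo "$degraded"
```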
This can happen after an unclean shutdown (like a power fail). Usually removing and re-adding the problem devices will correct the situation:
Code:
/sbin/mdadm /dev/md0 --fail /dev/sda5 --remove /dev/sda5
/sbin/mdadm /dev/md0 --add /dev/sda5
/sbin/mdadm /dev/md1 --fail /dev/sda6 --remove /dev/sda6
/sbin/mdadm /dev/md1 --add /dev/sda6
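If several members were kicked, the same fail/remove/add sequence can be scripted. A hedged sketch, not a tested recovery tool: the array/member pairs are the ones from this thread, and DRY_RUN=1 (the default) only prints the mdadm commands so they can be reviewed before being run for real:

```shell
#!/bin/sh
# With DRY_RUN=1 (the default) the mdadm commands are printed, not run.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# Array / kicked-member pairs, taken from the posts above.
# The unquoted $pair is split on whitespace on purpose.
for pair in "/dev/md0 /dev/sda5" "/dev/md1 /dev/sda6"; do
    set -- $pair
    run /sbin/mdadm "$1" --fail "$2" --remove "$2"
    run /sbin/mdadm "$1" --add "$2"
done
```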
Yes, that is exactly what happened. There was a problem with a UPS.
Problem solved and everyone happy. Thanks!
This came in handy for me too - I had a bad shutdown recently and my array didn't come back on its own. ...I thought I had lost a disk! (67.7% recovered and climbing - Whooooohoo!)
Same here. This thread saved my day :)
Now my RAID is syncing, since sda6 and sda5 failed.

Code:
Personalities : [raid1]
md0 : active raid1 sda6[2] sdb6[1]
      238275968 blocks [2/1] [_U]
      [==>..................]  recovery = 10.2% (24469056/238275968) finish=64.3min speed=55398K/sec
md2 : active raid1 sda5[0] sdb5[1]
      5855552 blocks [2/2] [UU]
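The recovery line can also be parsed to watch progress from a script. A small sketch, not from the original posts; the sample line is the one from the mdstat output above (on a live system: grep recovery /proc/mdstat):

```shell
# Recovery status line from the output above; on a live system:
# line=$(grep recovery /proc/mdstat)
line='[==>..................]  recovery = 10.2% (24469056/238275968) finish=64.3min speed=55398K/sec'

# Extract percent complete and the estimated minutes remaining.
pct=$(echo "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
eta=$(echo "$line" | sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p')
echo "resync ${pct}% done, about ${eta} min left"
```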
Just helped me. Thanks!
Yessss!
And helped me just now - THANKS!
The RAID didn't start because one controller came up behind the other after a power failure, so 4 of the 8 drives were flagged "non-fresh". Therefore the array didn't start, and (in my case, anyway) the --fail and --remove were not necessary (mdadm tried to start the array on 4 drives and failed, of course). Did an --add on all four drives, kick-started the RAID via

Code:
sudo mdadm -R /dev/md0

mounted it again:

Code:
sudo mount /dev/md0 /media/raid/

and everything was back in line. Joy! :-D

Ciao, Klaus

PS: My request for detailed information returned a weird error message - here's the complete output:

Code:
klaus@GoLem:~$ sudo mdadm --query --detail /dev/md0
mdadm: Unknown keyword devices=/dev/sde,/dev/sda,/dev/sdb,/dev/sdg,/dev/sdh,/dev/sdf,/dev/sdd,/dev/sdc
/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Sep  3 10:36:14 2005
     Raid Level : raid5
     Array Size : 1709388800 (1630.20 GiB 1750.41 GB)
  Used Dev Size : 244198400 (232.89 GiB 250.06 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Sun Jan 20 20:40:02 2008
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 128K
           UUID : 0ce38b42:cda216f1:5c8ccd86:cfb0a564
         Events : 0.281514

    Number   Major   Minor   RaidDevice State
       0       8       96        0      active sync   /dev/sdg
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8      128        3      active sync   /dev/sdi
       4       8      144        4      active sync   /dev/sdj
       5       8      112        5      active sync   /dev/sdh
       6       8       80        6      active sync   /dev/sdf
       7       8       64        7      active sync   /dev/sde

That "unknown keyword" at the top is weird - do I perhaps have some error in my config file? After all, the array is running nicely despite this...
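On the "Unknown keyword" question: mdadm treats the first word of each line of mdadm.conf as a keyword, and continuation lines must begin with whitespace to be read as part of the previous line. So a likely cause (a guess based on the conf format, not on Klaus's actual file, which we can't see) is that the devices= clause of an ARRAY line has wrapped to column 0. An illustrative fragment, reusing the UUID and device list from the output above:

```
# /etc/mdadm/mdadm.conf -- illustrative fragment, not Klaus's actual file.
#
# Broken: "devices=" starts at column 0, so mdadm reads it as an
# (unknown) keyword:
#   ARRAY /dev/md0 UUID=0ce38b42:cda216f1:5c8ccd86:cfb0a564
#   devices=/dev/sde,/dev/sda,/dev/sdb,/dev/sdg,/dev/sdh,/dev/sdf,/dev/sdd,/dev/sdc
#
# Fixed: indent the continuation line (or keep everything on one line):
ARRAY /dev/md0 UUID=0ce38b42:cda216f1:5c8ccd86:cfb0a564
   devices=/dev/sde,/dev/sda,/dev/sdb,/dev/sdg,/dev/sdh,/dev/sdf,/dev/sdd,/dev/sdc
```

The warning is harmless to a running array, which matches Klaus's observation that everything works despite it.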
Thanks
Also did a SMART test just to make sure things are OK:

Code:
smartctl -t long /dev/sda
smartctl -l selftest /dev/sda
Hi,
today I ran into the same problem, and this post helped me solve it. Many thanks and regards, j0inty

Code:
cicero ~ # dmesg |
This thread just saved my bacon. I followed klausbreuer's variation, because my array was raid5 and so was his.
So what happened was, a controller went offline, taking 2 drives with it (out of a 6-drive raid5 array... ouch!). I got the dreaded "kicking non-fresh" message for those 2 drives in the logs upon reboot. I KNEW at the time the controller went down that no data was being written to the array, as the array is just storage and does not contain the operating system... so I thought maybe I had a chance. So I added the two dropped members like klausbreuer posted (which is based off what macemoneta posted):

Code:
mdadm /dev/md0 --add /dev/hdg1
mdadm /dev/md0 --add /dev/hde1

(the console gave me a "re-added" message for each). Then finally I did a:

Code:
mdadm -R /dev/md0

No errors, so I did a "cat /proc/mdstat", which showed the usual 6 drives up with the [UUUUUU]. I then mounted the array in its usual spot and it was all there. Many thanks to macemoneta for providing a solid answer to build off of, and many thanks to klausbreuer for posting his version... :D
It helped me too =)
After I set up RAID-1, I began testing it. I halted the server and unplugged the first SATA drive. Then I powered it on and the system loaded fine. After that I did the same thing with the second SATA drive and everything was OK. Then I plugged the second SATA drive back in and started up. During boot the kernel warned that some md devices started with just one drive.

So when I do dmesg I get:

Code:
leopard:~# dmesg

In my case no --fail or --remove was needed; just doing --add was enough:

Code:
leopard:~# mdadm /dev/md3 --add /dev/sdb5

Code:
leopard:~# cat /proc/mdstat

Thanx!!!
Thank you!
This still helps several years after the thread started ;-)