Old 02-18-2006, 10:39 AM   #1
username is already
md: kicking non-fresh sda6 from array!


Hello,

I have some raid1 failures on my computer. How can I fix this?

# dmesg | grep md
ata1: SATA max UDMA/133 cmd 0xBC00 ctl 0xB882 bmdma 0xB400 irq 193
ata2: SATA max UDMA/133 cmd 0xB800 ctl 0xB482 bmdma 0xB408 irq 193
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: raid1 personality registered as nr 3
md: md2 stopped.
md: bind<sdb9>
md: bind<sda9>
raid1: raid set md2 active with 2 out of 2 mirrors
md: md1 stopped.
md: bind<sda6>
md: bind<sdb6>
md: kicking non-fresh sda6 from array!
md: unbind<sda6>
md: export_rdev(sda6)
raid1: raid set md1 active with 1 out of 2 mirrors
md: md0 stopped.
md: bind<sda5>
md: bind<sdb5>
md: kicking non-fresh sda5 from array!
md: unbind<sda5>
md: export_rdev(sda5)
raid1: raid set md0 active with 1 out of 2 mirrors
EXT3 FS on md2, internal journal
EXT3 FS on md0, internal journal
EXT3 FS on md1, internal journal


# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb5[1]
4883648 blocks [2/1] [_U]

md1 : active raid1 sdb6[1]
51761280 blocks [2/1] [_U]

md2 : active raid1 sda9[0] sdb9[1]
102799808 blocks [2/2] [UU]

unused devices: <none>


# e2fsck /dev/sda5
e2fsck 1.37 (21-Mar-2005)
/usr: clean, 18653/610432 files, 96758/1220912 blocks (check in 3 mounts)

# e2fsck /dev/sda6
e2fsck 1.37 (21-Mar-2005)
/var: clean, 7938/6471680 files, 350458/12940320 blocks (check in 3 mounts)
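
For what it's worth, you can confirm which copy is stale before re-adding anything: each member's md superblock carries an event counter, and the "non-fresh" member is the one whose count has fallen behind its partner. A minimal check using the device names from the output above (the grep pattern is just for readability):

Code:
mdadm --examine /dev/sda5 /dev/sdb5 | grep -E 'Update Time|Events'
mdadm --examine /dev/sda6 /dev/sdb6 | grep -E 'Update Time|Events'

In the mdstat output, [2/1] [_U] means the array is configured for two members but running on one, with the first slot empty.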
 
Old 02-18-2006, 11:03 PM   #2
macemoneta
This can happen after an unclean shutdown (like a power failure). Usually removing and re-adding the problem devices will correct the situation:

/sbin/mdadm /dev/md0 --fail /dev/sda5 --remove /dev/sda5
/sbin/mdadm /dev/md0 --add /dev/sda5

/sbin/mdadm /dev/md1 --fail /dev/sda6 --remove /dev/sda6
/sbin/mdadm /dev/md1 --add /dev/sda6
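
Once a device is re-added, the kernel resyncs it in the background and /proc/mdstat shows a progress bar while that runs. A quick way to watch it and then verify the final state (generic commands, not specific to this box):

Code:
watch -n 5 cat /proc/mdstat
mdadm --detail /dev/md0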
 
Old 02-19-2006, 04:04 AM   #3
username is already (Original Poster)
Yes, that is exactly what happened. There was a problem with a UPS.

Problem solved and everyone happy.

Thanks!
 
Old 01-23-2007, 01:09 PM   #4
Complicated Disaster
Quote:
Originally Posted by macemoneta
This can happen after an unclean shutdown (like a power failure). Usually removing and re-adding the problem devices will correct the situation: [...]
Thank you!!! I had the same problem and it's now fixed!

CD
 
Old 06-03-2007, 11:09 PM   #5
the_tflk
This came in handy for me too - I had a bad shutdown recently and my array didn't come back on its own. ...I thought I had lost a disk! (67.7% recovered and climbing - Whooooohoo!)
 
Old 07-29-2007, 07:34 AM   #6
jostmart
Same here. This thread saved my day.


Now my RAID is resyncing, since sda6 and sda5 had failed.


Personalities : [raid1]
md0 : active raid1 sda6[2] sdb6[1]
238275968 blocks [2/1] [_U]
[==>..................] recovery = 10.2% (24469056/238275968) finish=64.3min speed=55398K/sec

md2 : active raid1 sda5[0] sdb5[1]
5855552 blocks [2/2] [UU]

 
Old 08-12-2007, 05:46 AM   #7
gneeot
This just helped me too. Thanks!
 
Old 01-20-2008, 01:49 PM   #8
klausbreuer

Yessss!

And it helped me just now - THANKS!

The RAID didn't start because one controller came up after the other following a power failure, so 4 of the 8 drives were flagged "non-fresh".
The array therefore didn't start, and (in my case, anyway) the --fail and --remove steps were not necessary (mdadm had tried to start the array on 4 drives and failed, of course).

I did an --add on all four drives, kick-started the RAID via

sudo mdadm -R /dev/md0

then mounted it again:

sudo mount /dev/md0 /media/raid/

and everything was back in line. Joy! :-D
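
Side note: -R is just the short form of --run, which tells mdadm to start an array even though not every member is present. The long-hand equivalent:

Code:
sudo mdadm --run /dev/md0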

Ciao,
Klaus

PS: My request for detailed information returned a weird error message - here's the complete output:

klaus@GoLem:~$ sudo mdadm --query --detail /dev/md0
mdadm: Unknown keyword devices=/dev/sde,/dev/sda,/dev/sdb,/dev/sdg,/dev/sdh,/dev/sdf,/dev/sdd,/dev/sdc
/dev/md0:
Version : 00.90.03
Creation Time : Sat Sep 3 10:36:14 2005
Raid Level : raid5
Array Size : 1709388800 (1630.20 GiB 1750.41 GB)
Used Dev Size : 244198400 (232.89 GiB 250.06 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Sun Jan 20 20:40:02 2008
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 128K

UUID : 0ce38b42:cda216f1:5c8ccd86:cfb0a564
Events : 0.281514

Number Major Minor RaidDevice State
0 8 96 0 active sync /dev/sdg
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 128 3 active sync /dev/sdi
4 8 144 4 active sync /dev/sdj
5 8 112 5 active sync /dev/sdh
6 8 80 6 active sync /dev/sdf
7 8 64 7 active sync /dev/sde

That "unknown keyword" at the top is weird - do I perhaps have some error in my config file? After all, the array is running nicely despite this...
 
Old 12-11-2008, 10:30 PM   #9
nicoechaniz
Quote:
Originally Posted by macemoneta
This can happen after an unclean shutdown (like a power failure). Usually removing and re-adding the problem devices will correct the situation: [...]
Helped me too.

Thanks
 
Old 07-28-2009, 01:02 PM   #10
icy-flame
LQ Newbie
 
Registered: May 2003
Distribution: RH9
Posts: 9

Rep: Reputation: 0
Thumbs up

Quote:
Originally Posted by nicoechaniz View Post
Helped me too.

Thanks
Yup, another happy customer here.

Also did a SMART test just to make sure things are ok:

smartctl -t long /dev/sda
smartctl -l selftest /dev/sda
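
The long test runs inside the drive firmware and can take an hour or more, so the self-test log won't show the result until it finishes; the drive's own estimate is listed under "Extended self-test routine recommended polling time" in:

Code:
smartctl -c /dev/sda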
 
Old 09-12-2009, 07:57 PM   #11
emgee3
Quote:
Originally Posted by macemoneta
This can happen after an unclean shutdown (like a power failure). Usually removing and re-adding the problem devices will correct the situation: [...]
This fixed me too! Thanks!
 
Old 11-26-2009, 12:28 PM   #12
j0inty
Hi,

today I ran into the same problem, and this post helped me sort it out.

Many thanks!

regards
j0inty

Code:
cicero ~ # dmesg
md: created md0 
[    6.590611] md: bind<hde1>
[    6.590699] md: bind<hdf1>
[    6.590782] md: running: <hdf1><hde1>           
[    6.590989] md: kicking non-fresh hdf1 from array!
[    6.591071] md: unbind<hdf1>
[    6.591167] md: export_rdev(hdf1)


cicero ~ # mdadm /dev/md0 --add /dev/hdf1
mdadm: re-added /dev/hdf1                
cicero ~ # mdadm --detail /dev/md0       
/dev/md0:                                
        Version : 0.90                   
  Creation Time : Mon Jul  7 15:10:27 2008
     Raid Level : raid1                   
     Array Size : 58613056 (55.90 GiB 60.02 GB)
  Used Dev Size : 58613056 (55.90 GiB 60.02 GB)
   Raid Devices : 2                            
  Total Devices : 2                            
Preferred Minor : 0                            
    Persistence : Superblock is persistent     

    Update Time : Thu Nov 26 19:10:52 2009
          State : clean, degraded, recovering
 Active Devices : 1                          
Working Devices : 2                          
 Failed Devices : 0                          
  Spare Devices : 1                          

 Rebuild Status : 0% complete

           UUID : b16e8306:2d6c8eb3:814001e4:9408904d
         Events : 0.184                              

    Number   Major   Minor   RaidDevice State
       2      33       65        0      spare rebuilding   /dev/hdf1
       1      33        1        1      active sync writemostly   /dev/hde1
cicero ~ # mount /datapool/
cicero ~ # cat /proc/mdstat
Personalities : [raid0] [raid1]
md3 : active raid1 hdb1[1] hda1[0]
      136448 blocks [2/2] [UU]
md1 : active raid0 hdb2[1] hda2[0]
      2007936 blocks 64k chunks
md2 : active raid1 hdb3[1] hda3[0]
      18860224 blocks [2/2] [UU]
md0 : active raid1 hdf1[2] hde1[1](W)
      58613056 blocks [2/1] [_U]
      [========>............]  recovery = 44.8% (26304832/58613056) finish=21.6min speed=24902K/sec
unused devices: <none>
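
One detail in that output: the (W) after hde1 means that member carries the write-mostly flag, so md serves reads from the other mirror whenever it can. The flag is set per member when the device is added, along these lines (illustrative command, not from the post above):

Code:
mdadm /dev/md0 --add --write-mostly /dev/hde1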
 
Old 02-15-2010, 10:13 PM   #13
dfwrider

This thread just saved my bacon. I followed klausbreuer's variation, because my array was raid5 and so was his.

So what happened was, a controller went offline, taking 2 drives with it (out of a 6-drive raid5 array... ouch!)

I got the dreaded "kicking non-fresh" message for those 2 drives in the logs (upon reboot).

I KNEW at the time the controller went down that there was no data being written to the array, as the array is just storage and does not contain the operating system... so I thought maybe I had a chance.

So I added the two dropped members like klausbreuer posted (which is based on what macemoneta posted):


mdadm /dev/md0 --add /dev/hdg1

(console gave me a "re-added" message)

mdadm /dev/md0 --add /dev/hde1

(console gave me another "re-added" message)

Then finally I did a:

mdadm -R /dev/md0

No errors, so I did a "cat /proc/mdstat" , which showed the usual 6 drives up with the: [UUUUUU]

I then mounted the array in its usual spot and it was all there.

Many thanks to macemoneta for providing a solid answer to build off of, and many thanks to klausbreuer for posting his version...
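
One caveat for anyone who lands here after losing multiple raid5 members with writes in flight: re-adding non-fresh members worked cleanly in this case precisely because nothing was being written, so the kicked drives still held consistent data. The other commonly documented route is to stop the array and force assembly from all the members, which makes md pick the freshest superblocks itself (use with care; the device list here is illustrative):

Code:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/hde1 /dev/hdg1 ...   # list every member partition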
 
Old 12-13-2010, 09:54 AM   #14
delix
It helped me too =)

After I set up RAID-1 I began testing it. I halted the server and unplugged the first SATA drive, then powered it on, and the system booted fine. I did the same thing with the second SATA drive and everything was OK. Then I plugged the second drive back in and booted again; on startup the kernel warned that some md devices started with just one drive.
So when I ran dmesg I got:

Code:
leopard:~# dmesg
[...]
[    6.785280] md: raid1 personality registered for level 1
[    6.794486] md: md0 stopped.
[    6.807811] md: bind<sda1>
[    6.808026] md: bind<sdb1>
[    6.821761] raid1: raid set md0 active with 2 out of 2 mirrors
[    6.822465] md: md1 stopped.
[    6.885858] md: bind<sda2>
[    6.886056] md: bind<sdb2>
[    6.900995] raid1: raid set md1 active with 2 out of 2 mirrors
[    6.901313] md: md2 stopped.
[    6.933030] md: bind<sda3>
[    6.933224] md: bind<sdb3>
[    6.933246] md: kicking non-fresh sdb3 from array!
[    6.933251] md: unbind<sdb3>
[    6.933259] md: export_rdev(sdb3)
[    6.946926] raid1: raid set md2 active with 1 out of 2 mirrors
[    6.947240] md: md3 stopped.
[    6.958693] md: bind<sda5>
[    6.958897] md: bind<sdb5>
[    6.958932] md: kicking non-fresh sdb5 from array!
[    6.958937] md: unbind<sdb5>
[    6.958944] md: export_rdev(sdb5)
[    6.975326] raid1: raid set md3 active with 1 out of 2 mirrors
[    6.975642] md: md4 stopped.
[    6.986263] md: bind<sda6>
[    6.986473] md: bind<sdb6>
[    6.986498] md: kicking non-fresh sdb6 from array!
[    6.986504] md: unbind<sdb6>
[    6.986511] md: export_rdev(sdb6)
[    7.009305] raid1: raid set md4 active with 1 out of 2 mirrors
[    7.009620] md: md5 stopped.
[    7.075068] md: bind<sda7>
[    7.075359] md: bind<sdb7>
[    7.089303] raid1: raid set md5 active with 2 out of 2 mirrors
[...]
To fix it I didn't need to do --fail and --remove; in my case a plain --add was enough:

Code:
leopard:~# mdadm /dev/md3  --add /dev/sdb5
Code:
leopard:~# cat /proc/mdstat
Personalities : [raid1] 
md5 : active raid1 sda7[0] sdb7[1]
      3927744 blocks [2/2] [UU]
      
md4 : active raid1 sda6[0]
      4883648 blocks [2/1] [U_]
      
md3 : active raid1 sdb5[2] sda5[0]
      55568704 blocks [2/1] [U_]
      [=>...................]  recovery =  5.9% (3317568/55568704) finish=15.4min speed=56273K/sec
      
md2 : active raid1 sdb3[1] sda3[0]
      9767424 blocks [2/2] [UU]
      
md1 : active (auto-read-only) raid1 sda2[0] sdb2[1]
      3903680 blocks [2/2] [UU]
      
md0 : active raid1 sda1[0] sdb1[1]
      96256 blocks [2/2] [UU]
      
unused devices: <none>

Thanx!!!
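
One thing stands out in that mdstat, though: md4 is still running on a single mirror ([2/1] [U_]), since sdb6 was also kicked in the dmesg above, so it presumably needs the same treatment:

Code:
leopard:~# mdadm /dev/md4 --add /dev/sdb6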
 
Old 07-30-2012, 03:49 AM   #15
zidz
Thank you!
This still helps, several years after the thread started ;-)
 
  


Tags
mdadm, raid, raid1, sync


