This came in handy for me too - I had a bad shutdown recently and my array didn't come back on its own. ...I thought I had lost a disk! (67.7% recovered and climbing - Whooooohoo!)
The RAID didn't start because one controller came up behind the other after a power failure, so 4 of the 8 drives were flagged "non-fresh" (their superblock event counters were behind the rest).
Therefore the array didn't start, and (in my case, anyway) --fail and --remove were not necessary (mdadm tried to start the array on 4 drives and failed, of course).
I did an --add on all four drives, then kick-started the RAID via
sudo mdadm -R /dev/md0
and mounted it again:
sudo mount /dev/md0 /media/raid/
and everything was back in line. Joy! :-D
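Putting it together, the whole recovery was roughly this (the four device names below are just examples - check dmesg to see which of your drives were kicked as non-fresh):
Code:
# Re-add each drive that was kicked as "non-fresh" (example device names)
sudo mdadm /dev/md0 --add /dev/sde
sudo mdadm /dev/md0 --add /dev/sdf
sudo mdadm /dev/md0 --add /dev/sdg
sudo mdadm /dev/md0 --add /dev/sdh
# Force the array to run, then mount it
sudo mdadm -R /dev/md0
sudo mount /dev/md0 /media/raid/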
Ciao,
Klaus
PS: My request for detailed information returned a weird error message - here's the complete output:
klaus@GoLem:~$ sudo mdadm --query --detail /dev/md0
mdadm: Unknown keyword devices=/dev/sde,/dev/sda,/dev/sdb,/dev/sdg,/dev/sdh,/dev/sdf,/dev/sdd,/dev/sdc
/dev/md0:
Version : 00.90.03
Creation Time : Sat Sep 3 10:36:14 2005
Raid Level : raid5
Array Size : 1709388800 (1630.20 GiB 1750.41 GB)
Used Dev Size : 244198400 (232.89 GiB 250.06 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun Jan 20 20:40:02 2008
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
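A bit of digging suggests the "Unknown keyword devices=..." message is harmless: it apparently means the devices= clause in /etc/mdadm/mdadm.conf lost the leading whitespace that marks it as a continuation of the ARRAY line. If I understand the config format right, the expected layout is something like this (the UUID below is a placeholder):
Code:
# /etc/mdadm/mdadm.conf - a line starting with whitespace continues the previous line
ARRAY /dev/md0 level=raid5 num-devices=8 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
   devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh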
This thread just saved my bacon. I followed klausbreuer's variation, because my array was raid5 and so was his.
So what happened was that a controller went offline, taking 2 drives with it (out of a 6-drive raid5 array... ouch!)
I got the dreaded "kicking non-fresh" message for those 2 drives in the logs (upon reboot).
I KNEW that no data was being written to the array when the controller went down, since the array is just storage and doesn't contain the operating system... so I thought maybe I had a chance.
So I re-added the two dropped members like klausbreuer posted (which is based on what macemoneta posted):
mdadm /dev/md0 --add /dev/hdg1
(console gave me a "re-added" message)
mdadm /dev/md0 --add /dev/hde1
(console gave me another "re-added" message)
Then finally I did a:
mdadm -R /dev/md0
No errors, so I did a "cat /proc/mdstat" , which showed the usual 6 drives up with the: [UUUUUU]
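For reference, a healthy six-drive raid5 in /proc/mdstat looks roughly like this - each U in [UUUUUU] is one member that's up (a failed or missing member shows as _); the device names and block count below are made up:
Code:
Personalities : [raid1] [raid5]
md0 : active raid5 hde1[0] hdf1[1] hdg1[2] hdh1[3] hdi1[4] hdj1[5]
      1465175680 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]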
I then mounted the array in its usual spot and it was all there.
Many thanks to macemoneta for providing a solid answer to build off of, and many thanks to klausbreuer for posting his version...
After I set up RAID-1 I began testing it. I halted the server and unplugged the first SATA drive. Then I powered it on and the system booted fine. After that I did the same thing with the second SATA drive and everything was OK. Then I plugged the second SATA drive back in and restarted. On startup the kernel warned that some md arrays started with just one drive.
So when I run dmesg I get:
Code:
leopard:~# dmesg
[...]
[ 6.785280] md: raid1 personality registered for level 1
[ 6.794486] md: md0 stopped.
[ 6.807811] md: bind
[ 6.808026] md: bind
[ 6.821761] raid1: raid set md0 active with 2 out of 2 mirrors
[ 6.822465] md: md1 stopped.
[ 6.885858] md: bind
[ 6.886056] md: bind
[ 6.900995] raid1: raid set md1 active with 2 out of 2 mirrors
[ 6.901313] md: md2 stopped.
[ 6.933030] md: bind
[ 6.933224] md: bind
[ 6.933246] md: kicking non-fresh sdb3 from array!
[ 6.933251] md: unbind<sdb3>
[ 6.933259] md: export_rdev(sdb3)
[ 6.946926] raid1: raid set md2 active with 1 out of 2 mirrors
[ 6.947240] md: md3 stopped.
[ 6.958693] md: bind
[ 6.958897] md: bind
[ 6.958932] md: kicking non-fresh sdb5 from array!
[ 6.958937] md: unbind<sdb5>
[ 6.958944] md: export_rdev(sdb5)
[ 6.975326] raid1: raid set md3 active with 1 out of 2 mirrors
[ 6.975642] md: md4 stopped.
[ 6.986263] md: bind
[ 6.986473] md: bind
[ 6.986498] md: kicking non-fresh sdb6 from array!
[ 6.986504] md: unbind<sdb6>
[ 6.986511] md: export_rdev(sdb6)
[ 7.009305] raid1: raid set md4 active with 1 out of 2 mirrors
[ 7.009620] md: md5 stopped.
[ 7.075068] md: bind
[ 7.075359] md: bind
[ 7.089303] raid1: raid set md5 active with 2 out of 2 mirrors
[...]
To fix it I didn't need --fail and --remove; in my case just doing --add on the kicked partitions was enough.
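Concretely, based on the dmesg above, that meant something like this (the md-device-to-partition pairing comes straight from the "kicking non-fresh" lines):
Code:
# Re-add the partitions that were kicked as non-fresh
mdadm /dev/md2 --add /dev/sdb3
mdadm /dev/md3 --add /dev/sdb5
mdadm /dev/md4 --add /dev/sdb6
# Watch the mirrors resync
cat /proc/mdstat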