Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
As you can see, they now show up as inactive. And for some reason sdi1 and sdh1 are not even listed. What can I do to get them back?
To make matters worse, I placed some important data on them, and even though I was clever enough to keep an extra copy on another drive, guess which drive that was? So, I need to get them activated as is (at least so I can get the data off them) before I can rebuild them from scratch.
Any assistance is much appreciated!
PS. I'm running Mandriva 2010.1 and created them using the built-in disk partitioner.
No doubt a dumb question, but have you tried assembling and mounting them manually at the command line with mdadm, something like mdadm --assemble --scan or even mdadm --assemble {manually add raid parameters and devices}?
After your suggestion I dared to run some mdadm commands and got it working. I had to do mdadm --stop first, before assembling them with --assemble -v.
The only remaining problem is that I have to do this manually each time the computer is restarted. Is there any way to have them assembled automatically?
auto= This option declares to mdadm that it should try to create the device file of the array if it doesn't already exist, or exists but with the wrong device number.
So it says "try", no guarantee that it will succeed. As I suggested, try being more explicit by listing the details (number of devices, spares, raid level). See man mdadm.conf.
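To illustrate what "being more explicit" could look like: a minimal /etc/mdadm.conf matching the two arrays in this thread might resemble the sketch below. The UUID values are placeholders, not real ones; the actual UUIDs come from mdadm --examine --scan or mdadm --detail.

```
# /etc/mdadm.conf (sketch only -- replace the placeholder UUIDs with your own)
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=<uuid-of-md0>
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=<uuid-of-md1>
```

With explicit ARRAY lines like these, mdadm --assemble --scan knows exactly which devices belong together instead of having to guess.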
mdadm --assemble --scan says it only finds one drive for each set.
Code:
[root@hserver /]# mdadm --assemble --scan
mdadm: /dev/md/127_0 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/holisrv3:0 assembled from 1 drive - not enough to start the array.
mdadm: No arrays found in config file or automatically
It only finds one device, and it claims there are no arrays in the config file, even though there are? And what is with the strange md names?
(unless /etc/mdadm.conf is the wrong config file?)
It feels like there are some conflicting configurations somehow?
This sure looks like another config file? But...
[root@server /]# locate mdadm.conf
/etc/mdadm.conf
/usr/share/doc/mdadm/mdadm.conf-example
/usr/share/man/man5/mdadm.conf.5.lzma
Btw. "holisrv3" is the name of the computer.
Last edited by MartenH; 08-14-2010 at 02:35 PM.
Reason: Added more details
I noticed my error of stating level=5 instead of level=raid5 and have corrected it. Have not rebooted yet.
Perhaps I should give up on salvaging the current situation, simply erase all traces of the raid sets and do them all over again but manually instead of using the guide?
What say you?
(Since I could assemble them manually, I have moved all my important data off them.)
If you create them manually, you are also responsible for adding entries to the mdadm.conf file, so it would be useful to check whether what you have now works, if only to give you an idea. Re-creating the RAID devices without knowing exactly what the entries should look like would be pointless.
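One low-risk way to get those entries right, assuming the arrays are currently assembled and running, is to let mdadm generate them itself (a sketch; run as root):

```
# Print an ARRAY line for each currently running array
mdadm --detail --scan
# If the output looks sane, append it to the config file
mdadm --detail --scan >> /etc/mdadm.conf
```

That way the level, device count, and UUID in the config are guaranteed to match what the kernel is actually running.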
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
# mdadm --assemble -v /dev/md0 /dev/sde1 /dev/sdf1 /dev/sdi1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdi1 is identified as a member of /dev/md0, slot 2.
mdadm: added /dev/sdf1 to /dev/md0 as 1
mdadm: added /dev/sdi1 to /dev/md0 as 2
mdadm: added /dev/sde1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 3 drives.
# mdadm --assemble -v /dev/md1 /dev/sda1 /dev/sdh1
mdadm: looking for devices for /dev/md1
mdadm: /dev/sda1 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdh1 is identified as a member of /dev/md1, slot 1.
mdadm: added /dev/sdh1 to /dev/md1 as 1
mdadm: added /dev/sda1 to /dev/md1 as 0
mdadm: /dev/md1 has been started with 2 drives.
At this point I can mount them and use them if I want to.
After assembling manually I get the following output:
cat /proc/mdstat
Quote:
Personalities : [raid6] [raid5] [raid4] [raid0]
md1 : active raid0 sda1[0] sdh1[1]
733493248 blocks super 1.2 512k chunks
md0 : active raid5 sde1[0] sdi1[3] sdf1[1]
78153728 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
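With the arrays up, the metadata (including the UUIDs that mdadm.conf entries would reference) can be cross-checked per array; a generic sketch, run as root:

```
# Show array state, RAID level, member devices, and UUID
mdadm --detail /dev/md0
# Show the on-disk superblock of one member, for comparison
mdadm --examine /dev/sde1
```

If the UUID reported by --detail does not match the one in /etc/mdadm.conf, auto-assembly at boot will fail even though manual assembly works.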
I have used that same how-to without any problem, so I am beginning to wonder whether you are affected by a bug. Maybe it is time for a little detour; I would check with another distro.