LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 08-28-2008, 01:06 AM   #1
toshko3
LQ Newbie
 
Registered: Aug 2008
Posts: 9

Rep: Reputation: 0
Smile Soft RAID1 problems (invalid raid superblock magic)


Hi all,
I am, let's say, a sysadmin, and I hit this problem every time I build a mirror with mdadm from the Ubuntu installer. The Ubuntu version is 8.04.1, but it happens with 7.10 too. I am a little worried about the data on this RAID because I use this setup everywhere. This is the exact problem: I install, configure RAID1 on sda and sdb, and then the following message appears at random during start-up (this is from the syslog file, but the same is shown on screen at boot):

md: md0 stopped.
Aug 25 09:52:54 SupportivoFileserver kernel: [ 36.004675] md: invalid raid superblock magic on sda
Aug 25 09:52:54 SupportivoFileserver kernel: [ 36.004722] md: sda does not have a valid v0.90 superblock, not importing!
Aug 25 09:52:54 SupportivoFileserver kernel: [ 36.004728] md: md_import_device returned -22
Aug 25 09:52:54 SupportivoFileserver kernel: [ 36.004960] md: bind<sda2>
Aug 25 09:52:54 SupportivoFileserver kernel: [ 36.078227] md: md0 stopped.

After a lot of research I found no solution, except that the metadata has to be v0.90 for the kernel to autodetect the array properly. I checked the version and it IS 0.90. I also declared it in the mdadm.conf file, with no result. I swapped the motherboard, assuming it could be a chipset/driver issue, and swapped the PSU, again with no result. I tested all the HDDs with SeaTools and found no problems (they are also almost new). I have no idea what is happening. I would be very grateful for any help with this!
Thanks!
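For reference, the metadata version can be confirmed directly on each member and pinned in the config file. A minimal sketch, using the device names from this post (run as root; the check is read-only):

```shell
# Print the md superblock details of each mirror member and pick out
# the metadata version line (no-op here if mdadm is not installed).
if command -v mdadm >/dev/null; then
    mdadm --examine /dev/sda2 | grep -i version
    mdadm --examine /dev/sdb2 | grep -i version
fi

# To pin the version, /etc/mdadm/mdadm.conf can carry an ARRAY line:
#   ARRAY /dev/md0 metadata=0.90 devices=/dev/sda2,/dev/sdb2
```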
 
Old 08-28-2008, 03:14 AM   #2
kenoshi
Member
 
Registered: Sep 2007
Location: SF Bay Area, CA
Distribution: CentOS, SLES 10+, RHEL 3+, Debian Sarge
Posts: 159

Rep: Reputation: 32
If you run into issues when using whole drives, put a partition table (label) on each drive, create a single partition that takes all the space on it, and mirror the partitions instead.
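A minimal sketch of that advice, assuming the two disks from this thread (sda/sdb) and v0.90 metadata; it is guarded as a dry run because the commands destroy data on the disks:

```shell
#!/bin/sh
# Dry-run guard: nothing destructive happens unless CONFIRM=yes.
CONFIRM=${CONFIRM:-no}
if [ "$CONFIRM" = yes ]; then
    # One type-fd ("Linux raid autodetect") partition spanning each disk.
    for disk in /dev/sda /dev/sdb; do
        printf ',,fd\n' | sfdisk "$disk"
    done
    # Mirror the partitions, not the raw disks.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --metadata=0.90 /dev/sda1 /dev/sdb1
else
    echo "dry run: set CONFIRM=yes to partition and mirror"
fi
```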

Last edited by kenoshi; 08-28-2008 at 03:18 AM.
 
Old 08-28-2008, 07:23 AM   #3
toshko3
LQ Newbie
 
Registered: Aug 2008
Posts: 9

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by kenoshi View Post
If you run into issues with using whole drives, label your drives, and for each drive, create a partition and put all the space into that partition.

Then mirror the partitions instead.
If I understand you correctly, that is what the Ubuntu server installer already does: partition as type FD, mirror the partitions, then make the file system on top of md*. I don't think there is any difference between the Ubuntu installer and doing it manually with cfdisk, mdadm --create, and mkfs.reiserfs. The strange thing is exactly that the manually built RAIDs never show this error in the logs. If you need more info, please ask!
Edit: I use RAID only on a "data" partition, not the whole disk. The system is outside the RAID.
root@Fileserver:~# fdisk -l /dev/sda

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x02820282

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1094     8787523+  83  Linux
/dev/sda2            1095       19330   146480670   fd  Linux raid autodetect
/dev/sda3           19331       19457     1020127+  82  Linux swap / Solaris

root@Fileserver:~# fdisk -l /dev/sdb

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x29cd29cc

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1        1094     8787523+  83  Linux
/dev/sdb2            1095       19330   146480670   fd  Linux raid autodetect
/dev/sdb3           19331       19457     1020127+  82  Linux swap / Solaris
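Since the two listings are meant to be identical, the comparison can also be done mechanically: dump each table with sfdisk -d, strip the device name out, and diff the results. A sketch below uses sample dump-style lines (the start/size values are illustrative) rather than the live disks; on the real system the inputs would come from `sfdisk -d /dev/sda` and `sfdisk -d /dev/sdb`:

```shell
# Normalize the device name so only real layout differences show up.
printf '/dev/sda2 : start=17575110, size=292961340, Id=fd\n' \
    | sed 's,/dev/sda,DISK,g' > a.tbl
printf '/dev/sdb2 : start=17575110, size=292961340, Id=fd\n' \
    | sed 's,/dev/sdb,DISK,g' > b.tbl
diff a.tbl b.tbl && echo "layouts match"
```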

Last edited by toshko3; 08-28-2008 at 07:27 AM.
 
Old 08-28-2008, 11:29 PM   #4
kenoshi
Member
 
Registered: Sep 2007
Location: SF Bay Area, CA
Distribution: CentOS, SLES 10+, RHEL 3+, Debian Sarge
Posts: 159

Rep: Reputation: 32
Hmm, my bad, I didn't see the reference to sda2. The problem you have usually happens with whole drives; this is the first time I've seen it with mirrored partitions.

Try zeroing out the last 128K of sda2, where the superblock is, and try again. See:

http://www.linuxquestions.org/questi...-drive-661308/

And zero out the last 128k of sda2, not of the whole sda device.

Before you do this, make sure sdb has a copy of all the data on sda, or better yet, all your data is backed up.
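For the zeroing step itself, the arithmetic can be rehearsed on an ordinary file before touching the disk. A sketch (demo.img is a stand-in; on the real system the target would be /dev/sda2, and only after the data is safely on sdb or backed up):

```shell
# Create a 512 KiB dummy "partition" to practice on.
FILE=demo.img
dd if=/dev/urandom of="$FILE" bs=1024 count=512 2>/dev/null

# Byte offset where the last 128 KiB begins.
SIZE=$(stat -c %s "$FILE")
SEEK=$(( SIZE - 128 * 1024 ))

# Overwrite the last 128 KiB with zeros; conv=notrunc keeps the size.
dd if=/dev/zero of="$FILE" bs=1 seek="$SEEK" \
   count=$((128 * 1024)) conv=notrunc 2>/dev/null
```

Note that mdadm can also do this more surgically: `mdadm --zero-superblock /dev/sda2` erases just the md superblock on that member, which is usually the safer tool for this job.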
 
  

