LinuxQuestions.org > Forums > Linux Forums > Linux - Server

Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Old 03-24-2014, 03:34 PM   #1
jcmorse563
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Rep: Reputation: Disabled
Degraded RAID 1, was md0 now md127... need help


Here is the info; if you need more, please don't hesitate to ask...

root@bew:~# uname -a
Linux bew 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:15:33 UTC 2013 i686 i686 i386 GNU/Linux

root@bew:~# fdisk -l

Disk /dev/sda: 203.9 GB, 203928109056 bytes
255 heads, 63 sectors/track, 24792 cylinders, total 398297088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a6462

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 389302271 194650112 fd Linux raid autodetect
/dev/sda2 389304318 398295039 4495361 5 Extended
/dev/sda5 389304320 398295039 4495360 fd Linux raid autodetect

Disk /dev/sdb: 203.9 GB, 203928109056 bytes
255 heads, 63 sectors/track, 24792 cylinders, total 398297088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cd49d

Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 389302271 194650112 fd Linux raid autodetect
/dev/sdb2 389304318 398295039 4495361 5 Extended
/dev/sdb5 389304320 398295039 4495360 fd Linux raid autodetect

Disk /dev/md127: 199.3 GB, 199321649152 bytes
255 heads, 63 sectors/track, 24232 cylinders, total 389300096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000048ac

Device Boot Start End Blocks Id System
/dev/md127p1 63 385110179 192555058+ 83 Linux
/dev/md127p2 385110180 389287079 2088450 5 Extended
/dev/md127p5 385110243 389287079 2088418+ 82 Linux swap / Solaris


root@bew:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb1[1]
194650048 blocks [2/1] [_U]

No ARRAY is listed in /etc/mdadm/mdadm.conf any more; only the defaults remain:

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

I deleted the ARRAY line hoping to rebuild.


root@bew:~# dmraid -dtay
DEBUG: not isw at 2064645120
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 2063563264
DEBUG: not isw at 2064645120
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 2063563264
no raid disks

root@bew:~# mdadm --create /dev/md0 --name=0 --chunk=256 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to be part of a raid array:
level=raid0 devices=0 ctime=Wed Dec 31 16:00:00 1969
mdadm: partition table exists on /dev/sda1 but will be lost or
meaningless after creating array
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: super1.x cannot open /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 is not suitable for this array.
mdadm: create aborted


root@bew:~# mdadm -Evvvvs
mdadm: No md superblock detected on /dev/md127p5.
/dev/md127p2:
MBR Magic : aa55
Partition[0] : 4176837 sectors at 63 (type 82)
mdadm: No md superblock detected on /dev/md127p1.
/dev/md127:
MBR Magic : aa55
Partition[0] : 385110117 sectors at 63 (type 83)
Partition[1] : 4176900 sectors at 385110180 (type 05)
mdadm: No md superblock detected on /dev/sdb5.
/dev/sdb2:
MBR Magic : aa55
Partition[0] : 8990720 sectors at 2 (type fd)
/dev/sdb1:
Magic : a92b4efc
Version : 0.90.00
UUID : 63621c2f:6fa023e2:e368bf24:bd0fce41
Creation Time : Mon Jan 23 14:19:49 2012
Raid Level : raid1
Used Dev Size : 194650048 (185.63 GiB 199.32 GB)
Array Size : 194650048 (185.63 GiB 199.32 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 127

Update Time : Mon Mar 24 13:11:57 2014
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Checksum : cac6a6e5 - correct
Events : 1834362


Number Major Minor RaidDevice State
this 1 8 17 1 active sync /dev/sdb1

0 0 0 0 0 removed
1 1 8 17 1 active sync /dev/sdb1
/dev/sdb:
MBR Magic : aa55
Partition[0] : 389300224 sectors at 2048 (type fd)
Partition[1] : 8990722 sectors at 389304318 (type 05)
mdadm: No md superblock detected on /dev/sda5.
/dev/sda2:
MBR Magic : aa55
Partition[0] : 8990720 sectors at 2 (type fd)
/dev/sda1:
MBR Magic : aa55
Partition[0] : 385110117 sectors at 63 (type 83)
Partition[1] : 4176900 sectors at 385110180 (type 05)
/dev/sda:
MBR Magic : aa55
Partition[0] : 389300224 sectors at 2048 (type fd)
Partition[1] : 8990722 sectors at 389304318 (type 05)


root@bew:~# mdadm --examine /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 389300224 sectors at 2048 (type fd)
Partition[1] : 8990722 sectors at 389304318 (type 05)
root@bew:~#

root@bew:~# mdadm --examine /dev/sda
/dev/sda:
MBR Magic : aa55
Partition[0] : 389300224 sectors at 2048 (type fd)
Partition[1] : 8990722 sectors at 389304318 (type 05)

root@bew:~# mdadm --examine /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

root@bew:~# mdadm --examine /dev/md127
/dev/md127:
MBR Magic : aa55
Partition[0] : 385110117 sectors at 63 (type 83)
Partition[1] : 4176900 sectors at 385110180 (type 05)

root@bew:~# mdadm --assemble --run --force /dev/md0 /dev/sda1
mdadm: Cannot assemble mbr metadata on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted
As you can see, the output above says there is a superblock, yet here it says there isn't.

root@bew:~# cat /proc/mdstat personalities
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb1[1]
194650048 blocks [2/1] [_U]

unused devices: <none>
cat: personalities: No such file or directory

root@bew:~# umount --force /dev/md127p1
umount2: Device or resource busy
umount: /: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
umount2: Device or resource busy

I found the system off; it had crashed overnight. The only way it boots is if I use "boot in degraded mode" after selecting recovery mode from the boot menu. It will not boot into the desktop either; the command line is all we've got.

At first I just tried to rename md127 back to md0, but it always came back with "unable to gain exclusive access". Then I tried to add the drive, but always got the busy message. At one point I edited mdadm.conf with md0 instead of md127 and then rebuilt the kernel; no help, it still boots into md127 even though the conf said md0.

I also received an error message stating that the magics are different, and they are, though I don't know how that happened: one is aa55 and the other is a mix of numbers and letters. That seems to be common in the forums I have visited. I have researched for days now with no luck. Please, I need help with this. Thanks.
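As an aside, the degraded state can be read straight out of /proc/mdstat. A minimal sketch, using the mdstat text quoted earlier in this post as sample input rather than probing a live system:

```shell
#!/bin/sh
# Sample taken from the cat /proc/mdstat output above; on a live box
# you would feed /proc/mdstat itself to the awk program.
mdstat_sample='md127 : active raid1 sdb1[1]
      194650048 blocks [2/1] [_U]'

# "[2/1]" means 2 configured members, 1 working; the "_" in "[_U]"
# marks the missing slot, so any underscore there means degraded.
report=$(printf '%s\n' "$mdstat_sample" | awk '
  /^md/ { dev = $1 }
  /\[[0-9]+\/[0-9]+\]/ && /_/ { printf "%s degraded %s\n", dev, $(NF-1) }')
echo "$report"
```

Here the "[2/1] [_U]" pair from the paste confirms what the boot menu is saying: two members configured, only one (sdb1) working.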
 
Old 03-24-2014, 05:21 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 & 7
Posts: 3,121

Rep: Reputation: 836
Let's figure out what this RAID is. Post the full output of:
Code:
mdadm --detail /dev/md127
That should tell you what components it thinks it is made of.
 
Old 03-24-2014, 05:29 PM   #3
jcmorse563
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Original Poster
Rep: Reputation: Disabled
Thanks for your help; I've tried all I know. Here is the output. It should be md0, but it just won't let me do anything.
root@bew:~# mdadm --detail /dev/md127
/dev/md127:
Version : 0.90
Creation Time : Mon Jan 23 14:19:49 2012
Raid Level : raid1
Array Size : 194650048 (185.63 GiB 199.32 GB)
Used Dev Size : 194650048 (185.63 GiB 199.32 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 127
Persistence : Superblock is persistent

Update Time : Mon Mar 24 15:29:43 2014
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 63621c2f:6fa023e2:e368bf24:bd0fce41
Events : 0.1842490

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 17 1 active sync /dev/sdb1



Also, when I do a
root@bew:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 UUID=63621c2f:6fa023e2:e368bf24:bd0fce41

# This file was auto-generated on Mon, 24 Mar 2014 08:26:14 -0700
# by mkconf $Id$
which says md0, but if I just do
root@bew:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb1[1]
194650048 blocks [2/1] [_U]

unused devices: <none>
it says md127... I'm confused... lol
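The confusion boils down to the conf file and the kernel disagreeing on the node name while the UUID stays the same. A minimal sketch of that comparison, using only the strings pasted in this thread (nothing is probed live):

```shell
#!/bin/sh
# The ARRAY line from /etc/mdadm/mdadm.conf and the name from
# /proc/mdstat, both copied verbatim from this thread.
conf_line='ARRAY /dev/md0 UUID=63621c2f:6fa023e2:e368bf24:bd0fce41'
running_name='md127'

# Pull the node name out of the ARRAY line (strip the /dev/ prefix).
conf_name=$(printf '%s\n' "$conf_line" | awk '{sub("/dev/", "", $2); print $2}')

# The UUID is what actually identifies the array; the md node name is
# only a label, which is why the same array can show up as md0 or md127.
msg="conf names $conf_name but the kernel assembled $running_name"
echo "$msg"
```

The takeaway is that the running array and the ARRAY line describe the same UUID, so nothing is lost; only the label differs until the initramfs is regenerated with the corrected conf.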
 
Old 03-24-2014, 05:31 PM   #4
jcmorse563
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Original Poster
Rep: Reputation: Disabled
Sorry about the poor formatting.
If I just do the mdstat like this, it says md127:

root@bew:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb1[1]
194650048 blocks [2/1] [_U]

unused devices: <none>
 
Old 03-25-2014, 08:57 AM   #5
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 & 7
Posts: 3,121

Rep: Reputation: 836
I'm also confused, but read this post:

http://aubreykloppers.wordpress.com/...dadm-devmd127/
 
1 member found this post helpful.
Old 03-25-2014, 09:48 AM   #6
jcmorse563
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Original Poster
Rep: Reputation: Disabled
Mark this solved. Here is what I did.

First I did a lot of research. I ran a repair on the superblock; however, I don't think that was needed. I edited /etc/mdadm/mdadm.conf to say md0 instead of md127. I failed the drive that was reporting bad, then ran "update-initramfs -u" and rebooted, being careful not to boot from the failed drive. The first time I did this it didn't take for some reason and I had to repeat the process; maybe I didn't save the conf file or something. If, after reboot, "cat /proc/mdstat" reports md0 instead of md127, it worked.

After that I just re-added the failed drive back to the array with "mdadm --manage --add /dev/md0 /dev/sda1", remembering that it's the partition that is part of the array, not the drive itself. After the add I ran "cat /proc/mdstat" again to see if it was syncing; it was, and I waited for the sync to complete. Hours later I rebooted, and all was fixed.

The problem I was having was that I was rebuilding the kernel instead of just updating the initramfs, I think. As long as "cat /etc/mdadm/mdadm.conf" reports "ARRAY /dev/md0 UUID=63621c2f:6fa023e2:e368bf24:bd0fce41" and not "ARRAY /dev/md127 UUID=63621c2f:6fa023e2:e368bf24:bd0fce41", you're ready for the "update-initramfs -u", then the reboot and the re-addition of the failed drive. This may work without failing the drive, but that was just something I did while trying to fix the problem.

md127 seems to be the default md number now any time the array comes up degraded, but it's easier to deal with than I thought, although for me it took days. Check out the link above; I wish I had found that days ago.
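The procedure described above can be summarised as a dry-run sketch. Every command is only echoed, never executed, since these operations rewrite the initramfs and modify the array; the device names (/dev/md0, /dev/sda1) are the ones from this thread and would differ on another machine:

```shell
#!/bin/sh
# Dry-run: print each step instead of executing it.
run() { echo "would run: $*"; }

# 1. Point mdadm.conf at md0 by UUID. The poster edited the ARRAY line
#    by hand; `mdadm --detail --scan` prints lines in the right format.
run "mdadm --detail --scan"
# 2. Rebuild the initramfs so early boot uses the corrected name
#    (rebuilding the whole kernel, as first tried, is not needed).
run "update-initramfs -u"
# 3. After the reboot, re-add the failed *partition*, not the whole disk.
run "mdadm --manage /dev/md0 --add /dev/sda1"
# 4. Watch the resync until it completes before rebooting again.
run "cat /proc/mdstat"
```

This is only a restatement of the poster's steps under the assumptions above, not a general recipe; on a real system you would check `mdadm --detail --scan` output against the existing conf before appending anything.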
 
  

