Old 05-02-2015, 08:32 PM   #1
maples
Member
 
Registered: Oct 2013
Location: IN, USA
Distribution: Arch, Debian Jessie
Posts: 810

Rep: Reputation: 264
Issues with RAID: creating as /dev/md127 instead of what's in the config


Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks, and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.
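For reference, tearing down an old array and building a new two-disk RAID0 usually looks something like this sketch (the device names and mount point are illustrative, not taken from the thread):

Code:
# stop the old three-disk array and wipe its member superblocks
umount /mnt/raid
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1

# create the new two-disk RAID0 and put a filesystem on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0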

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1
As you can see, I have it specified to set up the RAID as /dev/md0. But every time I reboot, /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>
I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127
And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5
I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to make a difference.

Does anyone have any ideas?
 
Old 05-02-2015, 08:46 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,610

Rep: Reputation: 702
As the kids say, "Too much information!" Change your mdadm.conf ARRAY line to:

Code:
ARRAY /dev/md0 UUID=032e4ab2:53ac5db8:98806abd:420716a5
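A reliable way to produce such a line is to append the scanner's own output and then shorten the device path by hand; a minimal sketch:

Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# then edit the appended line, changing
#   ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5
# to
#   ARRAY /dev/md0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5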
 
Old 05-02-2015, 09:40 PM   #3
maples
Member
 
Registered: Oct 2013
Location: IN, USA
Distribution: Arch, Debian Jessie
Posts: 810

Original Poster
Rep: Reputation: 264
Thanks for the reply!

Unfortunately, that doesn't seem to do the trick...
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

#ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md0 UUID=032e4ab2:53ac5db8:98806abd:420716a5 

root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdb1[0] sdc1[1]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>
root@maples-server:~# ls /dev/md* -l
brw-rw---- 1 root disk 9, 127 May  2 21:35 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 21:35 maples-server:0 -> ../md127
One other thing that might or might not mean anything: The original RAID was created with the Debian installer when I first installed the system. I don't know if there's a difference or not, but I thought I would throw it out there just in case.
 
Old 05-03-2015, 08:32 AM   #4
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,610

Rep: Reputation: 702
Have you changed the hostname since you created the array? Check the output of:

Code:
mdadm --detail /dev/md127
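The field to compare is the "Name :" line against the machine's hostname: when mdadm cannot match an array's recorded homehost to the local system, it treats the array as foreign and assembles it under a high fallback number such as md127. A quick check:

Code:
mdadm --detail /dev/md127 | grep Name
hostname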
 
Old 05-03-2015, 08:53 AM   #5
maples
Member
 
Registered: Oct 2013
Location: IN, USA
Distribution: Arch, Debian Jessie
Posts: 810

Original Poster
Rep: Reputation: 264
No, the hostname is still the same.

Code:
root@maples-server:~# mdadm --detail /dev/md127 
/dev/md127:
        Version : 1.2
  Creation Time : Sat May  2 20:02:54 2015
     Raid Level : raid0
     Array Size : 488016896 (465.41 GiB 499.73 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May  2 20:02:54 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : maples-server:0  (local to host maples-server)
           UUID : 032e4ab2:53ac5db8:98806abd:420716a5
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
It's been "maples-server" since day one.
 
Old 05-03-2015, 09:12 AM   #6
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,610

Rep: Reputation: 702
Check the kernel log from the reboot:

Code:
dmesg |grep md
 
Old 05-03-2015, 09:43 AM   #7
maples
Member
 
Registered: Oct 2013
Location: IN, USA
Distribution: Arch, Debian Jessie
Posts: 810

Original Poster
Rep: Reputation: 264
There is a lot of non-mdadm output; I've bolded (hopefully) all of the relevant parts. I've also "filled in" one line that applies to mdadm but didn't happen to contain "md".

Code:
[    0.000000] Linux version 3.16.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt9-3~deb8u1 (2015-04-24)
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 root=UUID=3b4354d8-2380-4c70-9f9f-d8c1f5de5233 ro quiet
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 root=UUID=3b4354d8-2380-4c70-9f9f-d8c1f5de5233 ro quiet
[    0.535937] AMD IOMMUv2 driver by Joerg Roedel <joerg.roedel@amd.com>
[    0.600969] systemd-udevd[63]: starting version 215
[    0.601363] random: systemd-udevd urandom read with 1 bits of entropy available
[    0.636154] usb usb1: Manufacturer: Linux 3.16.0-4-amd64 ehci_hcd
[    0.636925] usb usb3: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.637669] usb usb4: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.638639] usb usb5: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.639510] usb usb6: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.640243] usb usb2: Manufacturer: Linux 3.16.0-4-amd64 xhci_hcd
[    0.640582] usb usb7: Manufacturer: Linux 3.16.0-4-amd64 xhci_hcd
[    0.647577] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0x40b0 irq 14
[    0.647579] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0x40b8 irq 15
[    1.224284] md: bind<sdb1>
[    1.249833] md: bind<sdc1>
[    1.251381] md: raid0 personality registered for level 0
[    1.251611] md/raid0:md127: md_size is 976033792 sectors.
[    1.251614] md: RAID0 configuration for md127 - 1 zone
[    1.251615] md: zone0=[sdb1/sdc1]
[    1.251619]       zone-offset=         0KB, device-offset=         0KB, size= 488016896KB
[    1.251629] md127: detected capacity change from 0 to 499729301504
[    1.255858]  md127: unknown partition table
[    5.172883] systemd-udevd[200]: starting version 215
[    9.427196] EXT4-fs (md127): mounting ext3 file system using the ext4 subsystem
[    9.446980] EXT4-fs (md127): mounted filesystem with ordered data mode. Opts: (null)
[    9.927294] systemd-journald[215]: Received request to flush runtime journal from PID 1

Last edited by maples; 05-03-2015 at 09:48 AM.
 
Old 05-03-2015, 10:09 AM   #8
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,610

Rep: Reputation: 702
The RAID is being automatically assembled by the kernel, which is why mdadm.conf doesn't matter. I don't know why it isn't using the ":0" part of the name to make it md0.
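One way to see where the early assembly is coming from (a sketch, assuming Debian's initramfs-tools) is to inspect the copy of mdadm.conf embedded in the initramfs, which runs before the root filesystem is mounted:

Code:
# list the initramfs contents and look for the embedded config
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm

# print the embedded mdadm.conf (Jessie initrds are gzip-compressed cpio)
zcat /boot/initrd.img-$(uname -r) | cpio -i --to-stdout '*mdadm.conf'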
 
Old 05-03-2015, 03:48 PM   #9
maples
Member
 
Registered: Oct 2013
Location: IN, USA
Distribution: Arch, Debian Jessie
Posts: 810

Original Poster
Rep: Reputation: 264
So there's no way to make the kernel give it a certain name?
 
Old 05-04-2015, 09:14 AM   #10
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,610

Rep: Reputation: 702
I suspect you can update the initramfs with the correct mdadm.conf to fix this. See this post:

http://unix.stackexchange.com/questi...bled-at-bootup
 
Old 05-04-2015, 04:11 PM   #11
maples
Member
 
Registered: Oct 2013
Location: IN, USA
Distribution: Arch, Debian Jessie
Posts: 810

Original Poster
Rep: Reputation: 264
Quote:
Originally Posted by smallpond
I suspect you can update the initramfs with the correct mdadm.conf to fix this. See this post:

http://unix.stackexchange.com/questi...bled-at-bootup
That did it!

For me, all I had to do was run
Code:
update-initramfs -u
and reboot, and now it's working the way it should!

Thank you!
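(To recap the fix: the running system's /etc/mdadm/mdadm.conf was correct, but the copy inside the initramfs, which performs the early assembly, was stale. Rebuilding the initramfs and rebooting makes the two agree:)

Code:
update-initramfs -u
reboot
# after the reboot:
cat /proc/mdstat     # array now appears as md0
ls -l /dev/md0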
 
  

