
maples 05-02-2015 07:32 PM

Issues with RAID- creating as /dev/md127 instead of what's in the config
 
Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:

root@maples-server:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have it specified to set up the RAID as /dev/md0. But every time I reboot, my /proc/mdstat shows:
Code:

root@maples-server:~# cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
     
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:

root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:

root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried all sorts of variations in /etc/mdadm/mdadm.conf, ranging from just the output of the above command (changing only "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing makes a difference.
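
(Concretely, the "output of the above command" approach was just appending the scan result and then hand-editing the name, something like:)
Code:

# append the detected definition, then edit /dev/md/maples-server:0 to /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf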

Does anyone have any ideas?

smallpond 05-02-2015 07:46 PM

As the kids say, "Too Much Information!" Change your mdadm.conf ARRAY line to:

Code:

ARRAY /dev/md0 UUID=032e4ab2:53ac5db8:98806abd:420716a5
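
If you want to test that line without a full reboot (and assuming nothing on the array is mounted), stop it and reassemble straight from the config:

Code:

mdadm --stop /dev/md127
mdadm --assemble --scan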

maples 05-02-2015 08:40 PM

Thanks for the reply!

Unfortunately, that doesn't seem to do the trick...
Code:

root@maples-server:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

#ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

root@maples-server:~# cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 sdb1[0] sdc1[1]
      488016896 blocks super 1.2 512k chunks
     
unused devices: <none>
root@maples-server:~# ls /dev/md* -l
brw-rw---- 1 root disk 9, 127 May  2 21:35 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 21:35 maples-server:0 -> ../md127

One other thing that might or might not mean anything: The original RAID was created with the Debian installer when I first installed the system. I don't know if there's a difference or not, but I thought I would throw it out there just in case.

smallpond 05-03-2015 07:32 AM

Have you changed the hostname since you created the array? Check the output from

Code:

mdadm --detail /dev/md127

maples 05-03-2015 07:53 AM

No, the hostname is still the same.

Code:

root@maples-server:~# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sat May  2 20:02:54 2015
    Raid Level : raid0
    Array Size : 488016896 (465.41 GiB 499.73 GB)
  Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May  2 20:02:54 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Chunk Size : 512K

          Name : maples-server:0  (local to host maples-server)
          UUID : 032e4ab2:53ac5db8:98806abd:420716a5
        Events : 0

    Number  Major  Minor  RaidDevice State
      0      8      17        0      active sync  /dev/sdb1
      1      8      33        1      active sync  /dev/sdc1

It's been "maples-server" since day one.

smallpond 05-03-2015 08:12 AM

Check the log from the reboot

Code:

dmesg |grep md

maples 05-03-2015 08:43 AM

There is a lot of non-mdadm output; the relevant parts are (hopefully) the md lines near the end. I've also "filled in" one line that goes with the mdadm output but just didn't happen to contain "md".

Code:

[    0.000000] Linux version 3.16.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt9-3~deb8u1 (2015-04-24)
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 root=UUID=3b4354d8-2380-4c70-9f9f-d8c1f5de5233 ro quiet
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 root=UUID=3b4354d8-2380-4c70-9f9f-d8c1f5de5233 ro quiet
[    0.535937] AMD IOMMUv2 driver by Joerg Roedel <joerg.roedel@amd.com>
[    0.600969] systemd-udevd[63]: starting version 215
[    0.601363] random: systemd-udevd urandom read with 1 bits of entropy available
[    0.636154] usb usb1: Manufacturer: Linux 3.16.0-4-amd64 ehci_hcd
[    0.636925] usb usb3: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.637669] usb usb4: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.638639] usb usb5: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.639510] usb usb6: Manufacturer: Linux 3.16.0-4-amd64 uhci_hcd
[    0.640243] usb usb2: Manufacturer: Linux 3.16.0-4-amd64 xhci_hcd
[    0.640582] usb usb7: Manufacturer: Linux 3.16.0-4-amd64 xhci_hcd
[    0.647577] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0x40b0 irq 14
[    0.647579] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0x40b8 irq 15
[    1.224284] md: bind<sdb1>
[    1.249833] md: bind<sdc1>
[    1.251381] md: raid0 personality registered for level 0
[    1.251611] md/raid0:md127: md_size is 976033792 sectors.
[    1.251614] md: RAID0 configuration for md127 - 1 zone
[    1.251615] md: zone0=[sdb1/sdc1]
[    1.251619]      zone-offset=        0KB, device-offset=        0KB, size= 488016896KB
[    1.251629] md127: detected capacity change from 0 to 499729301504
[    1.255858]  md127: unknown partition table

[    5.172883] systemd-udevd[200]: starting version 215
[    9.427196] EXT4-fs (md127): mounting ext3 file system using the ext4 subsystem
[    9.446980] EXT4-fs (md127): mounted filesystem with ordered data mode. Opts: (null)

[    9.927294] systemd-journald[215]: Received request to flush runtime journal from PID 1


smallpond 05-03-2015 09:09 AM

The RAID is being assembled automatically early in boot, from inside the initramfs, which carries its own copy of mdadm.conf. That's why edits to the one on the root filesystem never take effect. I don't know offhand why it isn't using the ":0" part of the name to make it md0.
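
If you want to confirm that, you can peek at the copy of mdadm.conf bundled inside the initramfs (this assumes a Debian-style gzip-compressed image with no prepended microcode archive):

Code:

# list the config files packed into the current initramfs
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf
# dump the bundled copy to compare against /etc/mdadm/mdadm.conf
zcat /boot/initrd.img-$(uname -r) | cpio --quiet -i --to-stdout etc/mdadm/mdadm.conf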

maples 05-03-2015 02:48 PM

So there's no way to make the kernel give it a certain name?

smallpond 05-04-2015 08:14 AM

I suspect you can update the initramfs with the correct mdadm.conf to fix this. See this post:

http://unix.stackexchange.com/questi...bled-at-bootup
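
On Debian that boils down to making sure /etc/mdadm/mdadm.conf has the ARRAY line you want and then regenerating the image, roughly:

Code:

update-initramfs -u
reboot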

maples 05-04-2015 03:11 PM

Quote:

Originally Posted by smallpond (Post 5357625)
I suspect you can update the initramfs with the correct mdadm.conf to fix this. See this post:

http://unix.stackexchange.com/questi...bled-at-bootup

That did it!

For me, all I had to do was run
Code:

update-initramfs -u

and reboot, and now it's working the way it should!

Thank you!
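
For anyone who finds this later, you can confirm after the reboot that the name stuck:
Code:

cat /proc/mdstat
ls -l /dev/md0
mdadm --detail /dev/md0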

