Sorry to bump an ancient thread, but I came across this while searching and thought I'd add some keywords for future people searching.
Longbow0, the link you posted was dead on. That was the problem.
Short version: for some reason, the "dmraid" package prevents certain block devices (which may or may not be, or ever have been, used in a fakeraid) from having their superblock wiped or from being added to a new mdadm RAID device.
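(If you want to confirm up front that it's dmraid grabbing your disks, a quick check like this should show it; the device name is just an example from my setup.)
Code:
dmraid -r          # lists any fakeraid (DDF/ISW/etc.) metadata dmraid has discovered
blkid /dev/sde     # a disk claimed this way will usually show TYPE="ddf_raid_member"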
Here's what I saw:
Code:
[root@stack ~]# fdisk -l | grep GB | grep 3000    # show my 3TB disks
Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdn: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdj: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdk: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdm: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdo: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdl: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdi: 3000.6 GB, 3000592982016 bytes
Disk /dev/sdh: 3000.6 GB, 3000592982016 bytes
Disk /dev/mapper/ddf1_49424d202020202010000079101403b2437f272b36611111: 3000.0 GB, 2999999004672 bytes
Disk /dev/mapper/ddf1_49424d202020202010000079101403b2437f272b90111111: 3000.0 GB, 2999999004672 bytes
So, what the heck are those /dev/mapper/ddf1_49xxx devices???
Throw some blocks at them with "dd" and they reveal themselves to be sitting on /dev/sde and /dev/sdl.
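(If you'd rather not write to them to figure that out, device-mapper will tell you directly which disks a mapping sits on; for example:)
Code:
dmsetup deps /dev/mapper/ddf1_49424d202020202010000079101403b2437f272b36611111
# prints the (major, minor) pairs of the underlying disks; match them against:
ls -l /dev/sde /dev/sdl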
Code:
[root@stack ~]# mdadm --zero-superblock /dev/sde
mdadm: Couldn't open /dev/sde for write - not zeroing
That's odd; none of my other 3TB disks did that. WTF? Let's try to create the RAID anyway:
Code:
[root@stack ~]# mdadm --create /dev/md0 --level=6 --raid-devices=15 /dev/sd{a,b,c,d,e,f,g,h,i,j,k,l,m,n,o}
mdadm: super1.x cannot open /dev/sde: Device or resource busy
mdadm: /dev/sde is not suitable for this array.
mdadm: super1.x cannot open /dev/sdl: Device or resource busy
mdadm: /dev/sdl is not suitable for this array.
mdadm: create aborted
Ehrmm... nope.
Code:
[root@stack ~]# dd if=/dev/zero bs=1024 count=10M conv=fsync of=/dev/sde
^C134401+0 records in
134401+0 records out
137626624 bytes (138 MB) copied, 1.59491 s, 86.3 MB/s
I can put blocks on it just fine, so what's the problem? No, it's not mounted either. None of my software RAID devices are mounted anywhere; they're presented as raw block devices to VMs that aren't even powered on at the moment.
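(In hindsight, the holder was visible all along; something like this would have shown the ddf1 mapping stacked on top of the disk.)
Code:
ls /sys/block/sde/holders/    # lists the dm-N device(s) currently holding sde open
lsblk /dev/sde                # if your util-linux has lsblk, the ddf1_... node shows up as a child of sde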
Code:
[root@stack ~]# yum erase dmraid
.....
Removed:
dmraid.x86_64 0:1.0.0.rc16-11.el6
Dependency Removed:
dmraid-events.x86_64 0:1.0.0.rc16-11.el6
Complete!
reboot.
Works like a champ!
Code:
[root@stack ~]# mdadm --create /dev/md0 --level=6 --raid-devices=15 /dev/sd{a,b,c,d,e,f,g,h,i,j,k,l,m,n,o}
[root@stack ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
38091755520 blocks super 1.2 level 6, 512k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
[>....................] resync = 2.1% (63183872/2930135040) finish=513.4min speed=95670K/sec
bitmap: 22/22 pages [88KB], 65536KB chunk
unused devices: <none>
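One last note for anyone who lands here and can't simply uninstall dmraid: I haven't tested this on the box above, but releasing the mapping with dmsetup and/or erasing the stale DDF metadata from the disk should get you to the same place.
Code:
dmsetup remove /dev/mapper/ddf1_49424d202020202010000079101403b2437f272b36611111
dmraid -r -E /dev/sde    # erase the fakeraid metadata from the disk itself (only do this if you're sure the fakeraid set is dead)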