I found a workaround of sorts. It looks like this is related to a 9.04 bug (https://bugs.launchpad.net/ubuntu/+s...nux/+bug/27037), and the loopback workaround from that bug brings back the array. It is not clear how I will handle this long term.
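For anyone hitting the same thing, the loopback idea is along these lines - a sketch only; the loop device numbers, the 32256-byte offset (partition start at sector 63 * 512 bytes) and the size limit (976760001 blocks * 1024 bytes) are read off my fdisk output further down, not the exact commands from the bug report:
# expose the missing sdc1/sdd1 partitions through loop devices
losetup -o 32256 --sizelimit 1000202241024 /dev/loop2 /dev/sdc
losetup -o 32256 --sizelimit 1000202241024 /dev/loop3 /dev/sdd
# assemble the lost array from the loop devices and mount it as before
mdadm --assemble /dev/md21 /dev/loop2 /dev/loop3
mount /dev/md21 /u02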
Note: before using this technique, I used gparted to tag the partitions as "raid". They disappeared again on reboot, so I had to do it again. I am not sure how this is going to work out long-term.
Note: I suspect some of this is related to the embedded "HOMEHOST" that is written into the RAID metadata on the partitions. The server was misnamed when first built and the name was changed later (cerebus -> cerberus), and the old name has surfaced in the name of a phantom device reported by gparted - /dev/mapper/jmicron_cerebus_root
----------------
Good day, experts!
I am taking a stab that the Server forum is the right place for this post.
I have a mythbuntu 9.10 system that I have upgraded from 8.10 to 9.04 to 9.10 in the last 2 days. I am on my way to 10.x, but need to make sure it works after every step.
The basic problem is that, in its current incarnation, it is not recognizing the underlying partitions for one of the RAID devices, and is therefore not happy.
As an 8.10 system, I had 2 RAID devices:
/dev/md16 -> /dev/sda5 and /dev/sdb5
/dev/md21 -> /dev/sdc1 and /dev/sdd1
/etc/fstab looked like this (in part):
/dev/md16 /var/lib xfs defaults 0 2
/dev/md21 /u02 xfsi defaults 0 2
After upgrading to 9.04:
- both RAID devices failed to load, so I eventually figured out to run this command:
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
- this added these lines to /etc/mdadm/mdadm.conf
ARRAY /dev/md16 level=raid1 num-devices=2 UUID=1664eda9:d2695350:7635e7b7:75a625cd
ARRAY /dev/md21 level=raid1 num-devices=2 UUID=8709c430:7a9eb6d3:7635e7b7:75a625cd
- a reboot later and they were back in business
- /dev/md16 was half busted, but it rebuilt fine when I ran the following (a way to watch the rebuild is sketched after these notes):
mdadm /dev/md16 -a /dev/sdb6
- note - the filesystem mounted fine once the type was set to xfs, but 9.04 did not recognize the xfsi entry left over in my fstab
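For reference, the rebuild progress can be watched with either of these (generic mdadm status checks, nothing specific to my setup assumed):
# per-array view: state, rebuild percentage, member devices
mdadm --detail /dev/md16
# kernel's live view of all arrays
cat /proc/mdstat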
After the upgrade to 9.10:
- /dev/md16 is very happy
- /dev/md21 is nowhere to be seen
>> here are the partitions that were part of the original RAID 1:
root@cerberus:~# fdisk -l /dev/sdc /dev/sdd
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0001b2c7
Device Boot Start End Blocks Id System
/dev/sdc1 1 121601 976760001 83 Linux
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0007f7aa
Device Boot Start End Blocks Id System
/dev/sdd1 1 121601 976760001 83 Linux
>> Here are the devices - or lack thereof:
root@cerberus:~# ls -l /dev/sd[cd]*
brw-rw---- 1 root disk 8, 32 2011-03-03 20:21 /dev/sdc
brw-rw---- 1 root disk 8, 48 2011-03-03 20:21 /dev/sdd
>> I tried creating the devices manually with mknod, but the nodes did not work for actually reaching the file systems, and were wiped out on reboot (a possibly better approach is sketched just after these commands):
mknod --mode=660 /dev/sdc1 b 8 33
mknod --mode=660 /dev/sdd1 b 8 49
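In hindsight, the kernel normally creates those nodes itself when it re-reads the partition table, so asking it to re-read is probably the better test (just a suggestion; if something like device-mapper is holding the whole disk, these may come back with a busy error, which is itself a clue):
# ask the kernel to re-read sdc's partition table and recreate /dev/sdc1
blockdev --rereadpt /dev/sdc
# or the same thing via parted's partprobe
partprobe /dev/sdc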
>> I tried using mdadm to build a 1-way RAID 1 with the disk device. (I was working with the single disk only in case it scragged the drive - this way I had an untouched copy)
root@cerberus:~# mdadm --create /dev/md21 --force --level=1 --raid-devices=1 /dev/sdc
mdadm: Cannot open /dev/sdc: Device or resource busy
mdadm: create aborted
note - this drive is not in use as swap or otherwise mounted, as far as I can tell (some checks that should confirm this are sketched below)
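These are the sorts of checks that should show whether anything - device-mapper in particular - is sitting on the disks (generic commands; only the sdc/sdd names come from my system):
# list any device-mapper mappings and the block devices they sit on
dmsetup ls
dmsetup table
# confirm nothing on sdc/sdd is swapped on or mounted
grep 'sd[cd]' /proc/swaps /proc/mounts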
>> I am a little fuzzy on when I did this, but adding these lines to /etc/initramfs-tools/modules and rebuilding the initramfs (update-initramfs -u) helped get things working at some point. I *think* this was required to get RAID to work at all after upgrading to 9.10. Here are the lines (the rough sequence is sketched after them):
raid1
raid456
the raid456 line does not seem to do anything - I took it out with no change
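For completeness, the sequence was roughly this (from memory, so treat it as a sketch rather than exact history):
# make the raid1 personality available inside the initramfs
echo raid1 >> /etc/initramfs-tools/modules
# rebuild the initramfs so the change is picked up at the next boot
update-initramfs -u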
>> running gparted lists the partition, but I get a warning triangle with an info box that says "fatal error - couldn't initialize XFS library"
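On that gparted complaint, one thing that might be worth checking (purely a guess on my part) is whether the XFS userspace tools survived the upgrades:
# is xfsprogs installed and intact?
dpkg -l xfsprogs
# reinstalling is cheap if gparted's XFS support is broken
apt-get install --reinstall xfsprogs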
>> a few more data points:
root@cerberus:~# mdadm --examine --scan
ARRAY /dev/md16 level=raid1 num-devices=2 UUID=1664eda9:d2695350:7635e7b7:75a625cd
root@cerberus:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md16 : active raid1 sdb6[1] sda6[0]
964044480 blocks [2/2] [UU]
unused devices: <none>
root@cerberus:~# mdadm --examine /dev/sdc1
mdadm: cannot open /dev/sdc1: No such file or directory
root@cerberus:~# mdadm --examine /dev/sdc
mdadm: No md superblock detected on /dev/sdc.
root@cerberus:~# cat /proc/partitions
major minor #blocks name
8 0 976762584 sda
8 1 11719386 sda1
8 2 1 sda2
8 5 995998 sda5
8 6 964044553 sda6
8 16 976762584 sdb
8 17 11719386 sdb1
8 18 1 sdb2
8 21 995998 sdb5
8 22 964044553 sdb6
8 32 976762584 sdc
8 48 976762584 sdd
9 16 964044480 md16
252 1 976584704 dm-1
I am a bit puzzled by this last line; the size lines up with my lost md device, but there is no /dev/dm-1 node (it was dm-0 earlier). There is a device with this major/minor here:
brw-rw---- 1 root disk 252, 1 2011-03-03 20:21 /dev/mapper/jmicron_cerebus_root
... and it shows up in gparted as an unallocated partition close to the full drive size
>> the HD controller is built into the motherboard - some sort of jmicron thing. There are 4 drives - all 1 TB SATA, IIRC.
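Putting that mapper device, the jmicron controller, and the "busy" errors together, my working guess is that dmraid has claimed sdc/sdd as a stale fakeraid set. I have not run any of this, so it is only a direction to investigate - and the erase step destroys the fakeraid metadata, so it is not something to run casually:
# list whatever BIOS/fakeraid metadata dmraid sees on the disks
dmraid -r
# deactivate the mapped sets so device-mapper releases the disks
dmraid -an
# if the jmicron metadata really is a leftover from the old install,
# it can be erased per disk (destructive to that metadata - double-check first)
dmraid -rE /dev/sdc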
I don't *really* want to repartition the drive, as there is a small gap between my most recent backups and what is on the drive, plus it would take me 2 days to move the data back.
Many thanks,
dKeith