Hello,
I have a fake RAID 1 on my nvidia board with two SATA drives.
This works well: I activated the set with "dmraid -ay", formatted it using
the control center, and can write data to it.
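(For reference, the filesystem is ext3 - see my fstab below. I went through the control center, so the exact command it ran is just my guess, but it would presumably be something like:)
Code:
mkfs.ext3 /dev/mapper/nvidia_eahbabee_part1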
But when I reboot, the system does not come up. The mapper device
node is not there.
I can log in as root in single-user mode.
When I run "dmraid -ay" again at that point, the mapper nodes appear and
the "raid" can be mounted and is fine.
Is the dmraid mapping supposed to survive a reboot, or am I supposed to
put that command into some boot script?
I see /etc/rc.d/boot.md, which calls mdrun, but that does not seem to be
enough.
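If a boot script is the way to go, my first idea was to simply drop the command into /etc/init.d/boot.local, roughly like this (just my guess - I'm not sure boot.local runs early enough, i.e. before the fstab mounts happen):
Code:
#!/bin/sh
# /etc/init.d/boot.local
# activate the nvidia fakeraid set so the /dev/mapper nodes exist
/sbin/dmraid -ay
Is that the intended way, or is there a dedicated dmraid boot script I'm missing?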
Sorry, I have no log files (yet). Nothing about this shows up in
/var/log/messages, and the dmesg output is gone for now.
But it really is just that the mapper nodes are not created at boot.
Any other experiences with nvraid are also very welcome; information on
this issue is rather scattered.
This is all on a fresh SuSE 10.2.
Some cut'n paste to finish:
Code:
# dmraid -s
*** Set
name : nvidia_eahbabee
size : 781422720
stride : 128
type : mirror
status : ok
subsets: 0
devs : 2
spares : 0
# dmraid -r
/dev/sda: nvidia, "nvidia_eahbabee", mirror, ok, 781422766 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_eahbabee", mirror, ok, 781422766 sectors, data@ 0
# cat /etc/fstab
...
/dev/mapper/nvidia_eahbabee_part1 /raid ext3 acl,user_xattr 1 2
(but I had to disable that to be able to boot again...)
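(instead of disabling it completely, I suppose I could also just add "noauto" and mount it by hand after running dmraid - untested guess, something like:)
/dev/mapper/nvidia_eahbabee_part1 /raid ext3 noauto,acl,user_xattr 1 2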
# dmraid -ay -v
INFO: Activating mirror RAID set "nvidia_eahbabee"
INFO: Activating partition RAID set "nvidia_eahbabee1"
# ls -l /dev/mapper/
lrwxrwxrwx 1 root root 16 7. Mar 02:52 control -> ../device-mapper
brw------- 1 root root 253, 0 7. Mar 03:23 nvidia_eahbabee
brw------- 1 root root 253, 1 7. Mar 03:23 nvidia_eahbabee1
brw------- 1 root root 253, 2 7. Mar 03:23 nvidia_eahbabee_part1
# mount /dev/mapper/nvidia_eahbabee_part1 /raid
# ls /raid
Depeche Mode lost+found
(I do own that CD, so that's okay. It's only a test anyway :-)
Thanks & Cheers, Tom.