I have an old server with 2 x 500G in sw raid1, slackware 12.1, on a promise PCI card and a new server with 2 x 1T SATA in sw raid1, slackware 14.
I have moved the disks and promise card from the old server to the new server.
Once booted, the old disks took the place of the original RAID array on the new server.
How can it be that the old mdadm.conf gets applied to a different set of disks?
How can I keep the data on the old disks while ending up with an additional RAID array alongside the new one?
It depends on how the machine recognizes the disks. I assume your boot partition is on RAID as well.
What I think happens is that your Promise-attached disks are recognized as sda/sdb. That is not something you can easily control; it is handled by the initramfs, no matter what you specify in udev or related tools. The initramfs recognizes the disks, assembles the array, and continues booting.
The old mdadm.conf is read by mdadm in the initramfs, and hence that array gets assembled.
I assume your new disks are assembled as well and visible?
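To see which arrays the kernel actually assembled, /proc/mdstat is the quickest check. A minimal sketch of pulling the array names and member disks out of it; the /proc/mdstat contents below are a made-up sample (device names and block counts are assumptions), since the real file only exists on a running Linux md system:

```shell
# Sample /proc/mdstat contents (hypothetical; on a real box you
# would read /proc/mdstat directly).
mdstat_sample='Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      488383936 blocks [2/2] [UU]
md1 : active raid1 sdc1[0] sdd1[1]
      976630336 blocks [2/2] [UU]
unused devices: <none>'

# Print each assembled array and its member partitions.
printf '%s\n' "$mdstat_sample" | awk '/^md/ {print $1, "->", $5, $6}'
```

If an expected array is missing from this listing, it was never assembled, which matches the "new disks not visible" symptom below.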
[SOLVED] move raid
My new disks were not visible.
The old disks showed up in the new server with partition type "Linux raid autodetect" in fdisk.
The new drives do not; their partitions were created as type 83 (plain Linux).
I stopped all RAID arrays and unmounted them.
Then I ran
> mdadm --assemble --scan
and used that output to create a new mdadm.conf
That seemed to work.
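For anyone hitting the same thing: the usual way to regenerate the config is to capture the scan output into mdadm.conf, giving each array its own md device name so the moved arrays and the new ones do not fight over md0/md1. A hedged sketch; the UUIDs and device names below are invented placeholders, and on a real box the ARRAY lines would come from `mdadm --examine --scan` (or `mdadm --detail --scan` with the arrays running), not typed by hand:

```shell
# Hypothetical example of what the rebuilt mdadm.conf might look
# like after the move (UUIDs are made up for illustration).
cat > mdadm.conf.example <<'EOF'
# Array on the new server's onboard SATA disks
ARRAY /dev/md0 UUID=11111111:22222222:33333333:44444444
# Array on the Promise card moved over from the old server
ARRAY /dev/md2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
EOF

# Sanity check: two distinct ARRAY lines, two distinct md devices.
grep -c '^ARRAY' mdadm.conf.example
```

The point of the distinct names (md0 vs md2 here) is exactly the problem in this thread: with both pairs of disks present, each array needs its own identity in mdadm.conf.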