LinuxQuestions.org


the_zone 03-08-2013 01:25 PM

move raid
 
I have an old server with 2 x 500 GB disks in software RAID 1 (Slackware 12.1) on a Promise PCI card, and a new server with 2 x 1 TB SATA disks in software RAID 1 (Slackware 14).
I have moved the disks and the Promise card from the old server to the new server.
Once booted, they took the place of the original RAID array on the new server.

How can it be that mdadm.conf ends up being used with a different set of disks?

How can I make sure that, while retaining the data, I end up with an additional RAID array?
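
For reference, the layout on both sets of disks can be inspected with something like this (device names are only examples, yours may differ):
> cat /proc/mdstat
> fdisk -l
> mdadm --examine /dev/sda1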

grtz

ST

Ser Olmy 03-08-2013 04:47 PM

Quote:

Originally Posted by the_zone (Post 4907563)
Once booted, they took the place of the original RAID array on the new server.

Do you mean that the server booted the old Slackware 12.1 installation on the array attached to the Promise controller? If so, that's a BIOS issue. Look for the Boot Order settings.

jlinkels 03-08-2013 05:31 PM

It depends on how the machine recognizes the disks. I assume your boot partition is RAIDed as well.

What I think happens is that your Promise-attached disks are recognized as sda/sdb. That is something you cannot easily control; it is taken care of by the initramfs, no matter what you specify in udev or related tools. The initramfs recognizes the disks, assembles the array and continues booting.

The old mdadm.conf is read by mdadm in the initramfs, and hence the old array gets assembled.

I assume your new disks are assembled as well and visible?
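
A quick way to check what the initramfs has assembled (generic commands, nothing system-specific):
> cat /proc/mdstat
> mdadm --detail --scan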

jlinkels

the_zone 03-12-2013 07:47 AM

[SOLVED] move raid
 
@Ser Olmy
Quote:

Do you mean that the server booted the old Slackware 12.1 installation on the array attached to the Promise controller? If so, that's a BIOS issue. Look for the Boot Order settings.
The drives on the Promise controller are not visible in the BIOS.

@jlinkels
My new disks were not visible.

The old disks show up in the new server with partition type "Linux raid autodetect" in fdisk.
The new drives do not; their partitions were detected as type 83 (Linux).

I stopped all RAID arrays and unmounted them.
Then I ran
> mdadm --assemble --scan

and used that output to create a new mdadm.conf

That seemed to work.
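
For reference, the usual sequence looks something like this (device names, mount points and the mdadm.conf path are only examples; adjust for your system):
> umount /mnt/oldarray
> mdadm --stop /dev/md1
> mdadm --assemble --scan
> mdadm --detail --scan >> /etc/mdadm.conf

mdadm --detail --scan prints ARRAY lines that identify each array by UUID, so the entries in mdadm.conf do not depend on which controller the disks happen to sit on.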

