LinuxQuestions.org


coastdweller 11-08-2006 02:17 AM

quick rh8 ide raid to sata raid
 
I have a 3-disk 150 GB IDE RAID on RH8 that I need to migrate (copy/ghost) to a new disk array in the same machine.

My question is: what issues will I run into moving from a 3-disk IDE Promise stripe to a 2-disk 750 GB Adaptec SATA stripe?

Will it be as easy as copying the 3 disks to the two 750s and then booting off the 750 stripe?

What am I looking at tomorrow?

kstan 11-08-2006 02:36 AM

It should be OK. However, are you using hardware or software RAID on the SATA and the IDE? The easier way is (a rough command sketch follows the list):
1. Boot from a live CD.
2. Copy the entire system from the IDE array to the SATA array.
3. After all files are copied, boot a Fedora Core rescue CD (2, 3, 4, 5, or 6, up to you).
4. chroot into the '/' directory on the SATA array.
5. Edit /etc/fstab and /boot/grub/menu.lst to suit the new environment.
6. Run grub-install on the new disk (/dev/sda or /dev/sdb or ...).
7. Restart.
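To make steps 2-6 concrete, here is a minimal sketch from a rescue shell. The device names are assumptions (a Promise IDE array often shows up as /dev/hde..., or as /dev/md0 if it is Linux software RAID, and the Adaptec array as /dev/sda); check yours with 'fdisk -l' first:

mkdir /mnt/old /mnt/new
mount /dev/hde1 /mnt/old            # old IDE RAID root (a guess; could be /dev/md0)
mount /dev/sda1 /mnt/new            # new SATA array root
cp -ax /mnt/old/. /mnt/new/         # copy everything, preserving permissions and links
chroot /mnt/new /bin/bash           # work inside the copied system
vi /etc/fstab /boot/grub/menu.lst   # point both files at the new devices
grub-install /dev/sda               # put the boot loader on the new disk
exit                                # leave the chroot, then reboot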

Please don't simply remove your old RAID; keep it intact until the new array boots cleanly. Remember, your Linux must be able to detect the SATA devices, and I assume you don't have any encrypted partitions.

ks

coastdweller 11-08-2006 09:52 AM

I'm assuming it's hardware RAID, but I might be mistaken.

The Promise card is more than likely not hardware RAID, so I would be going from software RAID to hardware RAID.

Will this present a problem?

When you say "chroot", do you mean simply running the command "chroot /"?

I'm a little scared about doing this, but I must do it today =)

kstan 11-08-2006 07:12 PM

If you use hardware RAID, it is transparent to your OS; the computer will see the array as a normal hard disk, probably sda or sdb.
Once you have copied everything from the old disks onto the new SATA array, the majority of the work is done. However, you still need to make sure the new disk is able to boot: on an IDE hard disk, the first sector (the 512-byte MBR) holds the boot loader code and the partition table. I don't know exactly how hardware RAID presents this, but I think there is not much difference, and you need to set it up or your Linux may not boot.
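As a precaution, you can save a copy of that first sector before touching anything (just a safety step, not part of the migration itself; /dev/hda here stands for whichever disk you boot from now):

dd if=/dev/hda of=/root/mbr-backup.bin bs=512 count=1   # back up the 512-byte MBR (boot code + partition table)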

Normally the way we do it is with the command grub-install /dev/yournewhdd.

I assume your SATA device has only 2 partitions:
/dev/sdb1 /
/dev/sdb2 swap

The FC rescue disk probably mounts the SATA array at /mnt/sysimage (I forget the exact path; you can check it with 'mount'). Then:
- Run 'chroot /mnt/sysimage' (this simulates having booted into the new hard disk, though not 100%).
- Edit /etc/fstab (remember: after the chroot) and make sure the / and swap entries are correct.
- Edit /boot/grub/menu.lst (you may need to change root=(hd0,1) and the kernel line's root=/dev/hda1); a sketch of both files follows.
- Run grub-install /dev/sdb (taking /dev/sdb as your SATA array and /dev/sda as the IDE disk).
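As a sketch, the two edited files on the new array might end up looking like this (the filesystem type, kernel version, and exact device names are assumptions; use what your system actually has):

/etc/fstab:
/dev/sdb1   /      ext3   defaults   1 1
/dev/sdb2   swap   swap   defaults   0 0

/boot/grub/menu.lst:
title Red Hat Linux
    root (hd1,0)
    kernel /boot/vmlinuz-2.4.18-14 ro root=/dev/sdb1
    initrd /boot/initrd-2.4.18-14.img

Note that GRUB counts partitions from zero, so /dev/sdb1 is (hd1,0) rather than (hd0,1), and if you remove the old IDE disks afterwards the SATA array will likely become hd0/sda, so the entries would need adjusting again.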

After this, your system should be able to boot. Remember, this only works if your Linux supports your new SATA RAID hardware.

coastdweller 11-12-2006 07:05 AM

kstan, I wanted to take a moment and update you and the thread on the outcome.

After realizing that I was actually going from software RAID to software RAID, it became clear this wouldn't work. The system had an IDE Promise card that conflicted with the SATA RAID card I was adding to replace the array.

With that settled, I worked out that upgrading two of these servers (new parts, services) would cost about $2,000.

I convinced the client to go with all new hardware and install SUSE.

Ultimately the bill was $3,120: instead of two older machines with huge disks, they ended up with two new machines with huge disks for about $1,000 more.

The cost to them was time, plus copying over 440 GB between the old and new server (times two), but it was a solution.

Thank you for your help and time on this.

kstan 11-12-2006 06:45 PM

Wow, the space is 1,000 times bigger? Meaning 440 GB x 1,000? That equals 440 TB of storage! What filesystem do you use?

coastdweller 11-13-2006 12:04 AM

=) "They wish", no I meant for "1000 more" in dollars they ended up with totally new hardware instead of using the 2 old servers still and just upgrading their disk systems.

Essentially the idea was to take 3 160gb drives and move them to two 750gb sata drives (per machine) giving them about 1397 gbs of space version about 460 I think it was.
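The ~1397 figure is just the drives' decimal capacity expressed in the binary units the OS reports; a quick check of the conversion (a one-liner, nothing system-specific):

echo 'scale=1; 2 * 750 * 10^9 / 2^30' | bc   # 1396.9, i.e. two 750 GB drives show up as ~1397 GiB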

So I quoted about $2,200 for upgrading the systems; that didn't work out, so I ended up building entirely new machines, which gives them new machines and lets them keep the old ones, so they end up with:

(This is times two)
460 GB x 2
1397 GB x 2

.... between their two locations (they rsync this stuff nightly, incrementally)
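For reference, a nightly incremental sync like that is typically a single rsync job run from cron. This is a sketch with hypothetical paths, user, and hostname; rsync only transfers changed files, which is what keeps the nightly run manageable:

# /etc/crontab entry - run every night at 2:00 AM
0 2 * * * root rsync -az --delete /srv/data/ backup@remote-site:/srv/data/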

