
LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   I'm over my head - need a cookbook for transfering system to new disks (https://www.linuxquestions.org/questions/linux-server-73/im-over-my-head-need-a-cookbook-for-transfering-system-to-new-disks-578509/)

recneps_divad 08-20-2007 04:38 PM

I'm over my head - need a cookbook for transferring system to new disks
 
I'm hoping that somebody can help me by directing me to a step-by-step
cookbook of some sort for what I'm trying to do. Here's the summary:

We've got a Dell SC1425 1U server. The system has dual 300GB SATA drives
that I've configured in a software RAID 1. It's running CentOS 4.5 and
is pretty much up to date. We've also got Plesk installed on it, and
its sole purpose is to host websites.

We recently got this idea to replace the 300GB drives with 500GB drives.
My programmer and I thought that the best way to accomplish this would
be to use Linux Ghost to copy the contents of the drives. So we pulled
one of the 300's and replaced it with the 500. Then we ghosted the 300
to the 500. Next we replaced the 300 with the other 500 and used ghost
again to copy the data on the first 500 to the second. We rebooted the
system and it worked! Well, almost.

The problem was that Linux Ghost did a fantastic job of duplicating
the exact dimensions of the 300 drives onto the 500. So we had a system
with partitions that were only 300GB in size. Obviously we wanted to
take advantage of all that extra space. And that's where things went
wrong.

We've tried a number of things, from using Linux Ghost to expand the
partition, to using fdisk and dd to redefine the partitions and then
copying the data from one of the 300s via an external USB 2.0 drive.
Nothing works. Where did we go wrong?

Your assistance is much appreciated,


-- Dave

syg00 08-20-2007 05:11 PM

IMHO you would have been better off using "cp -a .." (per partition), or rsync - something like that. It would have used the target filesystem as it was defined, rather than forcing the source filesystem definition on you.
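
Something like this per partition - just a sketch, the device names and mount points are invented, not from your box:

# new, bigger partition already formatted and mounted somewhere temporary
mount /dev/sdb1 /mnt/new
# copy everything across, keeping ownership, permissions and links,
# and staying on the one source filesystem
rsync -aHx /mnt/old/ /mnt/new/
# or, much the same thing:
# cp -a /mnt/old/. /mnt/new/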

Most (all ??) filesystems can be expanded OK - by default they will fill the partition, so no need to screw around trying to work out exact sizes. Each filesystem has its own command.
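
For ext3, for instance (device name is just an example):

# grow an ext3 filesystem to fill its (already enlarged) partition;
# the offline way is e2fsck followed by resize2fs
umount /dev/sdb1
e2fsck -f /dev/sdb1
resize2fs /dev/sdb1      # no size given = fill the whole partition
# (there is also ext2online for growing ext3 while mounted, if I remember right)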

Michaelx 08-20-2007 05:34 PM

Maybe the GParted LiveCD could help you reconfigure your partitions. ASPLinux is another name that comes to mind here, as it lets you modify partitions that already contain data.

recneps_divad 08-20-2007 06:54 PM

Perhaps I didn't explain the severity...
 
Thanks for trying to help guys. When I said that I had a problem in my post, I meant that I now have an unbootable system. I should have also disclosed that although I have 30+ years of computer experience and am fairly clever, I'm still a Linux newbie and am a bit over my head. Not that I don't know about boot-blocks and all of that; it's just that I'm still learning this stuff. That's why I've decided to fall on my sword and hope that somebody can give me some link to a step-by-step resource on what to do in these situations.

Thanks again,

-- Dave

syg00 08-20-2007 09:41 PM

Quote:

Originally Posted by recneps_divad (Post 2865161)
When I said that I had a problem in my post, I meant that I now have an unbootable system.

Mmmmm - that wasn't conveyed by
Quote:

We rebooted the system and it worked! Well, almost.
I read that as you were (merely) pissed off that you couldn't use the extra 200 Gig.
Presumably you still have the 300 Gig drives. Why not simply redo the operation from the start? You know that (sorta) works. Get that working again, tell us what filesystem type it is, and somebody will (hopefully) tell you how to expand it to fill the drive.
Most f/s will do it "on the fly" with the partition mounted. Else you can use a liveCD.

recneps_divad 08-20-2007 10:24 PM

Those pesky details
 
Sorry. My tendency toward brevity did me in.

The system became unbootable when we tried one of the options in Linux Ghost to expand the partition. (The partition was ext3, though fdisk says it's Software RAID, which I believe amounts to the same thing.) That's what killed it. Since then I've tried various things to copy the data from the old disk. I suppose I should go back to the original copy operation as before. It took five hours, so I've been reluctant to do it. Now I believe I have no choice but to go back to at least that step.

-- Dave

syg00 08-20-2007 11:21 PM

Wouldn't it be easier to let the mirroring handle it ???.
Break the RAID set, remove the second drive, add a new bigger disk. When the synch has finished, break the RAID, take out the final 300-Gig and add the other big drive.

Sound too easy ???. Note I don't (never have) used software mirroring.
Just me musing into my (cuppa) tea.
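
Something along these lines, I'd imagine - untested, and the md/sd names are guesses:

# drop the second member out of the mirror
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# swap in the bigger disk, give it a partition of type fd,
# then add it and let the resync run
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat          # watch the rebuild
# then repeat with the other disk; the array stays its old size
# until you grow it afterwards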

recneps_divad 08-21-2007 11:19 PM

It's getting better now...
 
Guys, I've really appreciated all the help you all have been trying to give me. I've actually got the server up and running again. Here's where it stands at the moment:

I pulled out one of the new 500GB drives and restored one of the old 300GB drives. I booted, the server found it and booted from it. I then used # mdadm --add to add the 500GB drive to the RAID 1 array. I waited for the sync to complete. Then I ran grub and did the setup, just to be sure, and then shut down.
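
For the record, the commands were roughly these (the device names here are from memory, so treat them as examples):

# put the 500GB partition into the degraded RAID 1 array
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat               # wait for the resync to finish
# then reinstall GRUB on the new disk, just to be sure it can boot
grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit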

Next I pulled the 300GB and put the 500GB back in. (I had previously initialized my 500GB drives with the partition sizes that I wanted. Something I hadn't mentioned earlier is that I had a Linux swap partition at the very end of the drive that I had to move back to the end.) So I had properly sized partitions, just not bootable ones.

I booted my server and it found the cloned system on the "original" 500GB drive. I've now used mdadm --add to merge the other 500GB drive with the working one. Looks like I'm going to have a working system again.

Just one thing: the mdadm utility noticed that I had a partition on the 500GB drive that was larger than the booted partition on the 300GB system. So when it did the clone, it created a partition that's 300GB in size. The good news (if there is some) is that at least my swap partition is still at the end of my drive. So what I have now looks like this:

   Device Boot    Start      End      Blocks   Id  System
/dev/sdb1              1    36726  290985313+  fd  Linux raid
/dev/sdb2          36227    60500  194980905   83  Linux
/dev/sdb3          60501    60801    2417782+  82  Linux swap

So my challenge is to expand sdb1 to include the space found in sdb2. Ideas?


-- Dave

syg00 08-22-2007 12:26 AM

Simplest (and safest) answer:
- create a new RAID pair in the spare 200Gig on each disk. Then move your biggest/fastest growing data over there.

Less simple:
- break the pair, expand the partition and filesystem, rebuild the pair.
I presume this mirrors (er, sorry about that :p ) what you tried to do last time.
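
For the "less simple" route, roughly (untested, device names assumed):

# take one disk out of the mirror
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# in fdisk: delete sdb2, then delete sdb1 and recreate it with the
# same starting cylinder but a larger end, keeping type fd
fdisk /dev/sdb
# put it back, let it resync, then do the same on the other disk
mdadm /dev/md0 --add /dev/sdb1
# finally grow the array and the filesystem inside it
mdadm --grow /dev/md0 --size=max    # older mdadm may want an explicit size
e2fsck -f /dev/md0
resize2fs /dev/md0
# (the last two need the filesystem unmounted - i.e. from a liveCD -
# or use ext2online while it's mounted)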

recneps_divad 08-22-2007 12:54 AM

Alex, I'll go for combining partitions for 200
 
Safe answer? Not a chance! Actually, for what I want to do with this system (host websites), having data split in different places does me no good. So, on with expanding the partition.

I like your suggestion about breaking the pair. That at least gives me some fall-back position if the partition games don't work. (Not that all is lost - I still have the original 300GB drives as the backup.)

So what utilities did you say I might try and use?

Thanks again for the guidance,


-- Dave

syg00 08-22-2007 07:20 PM

Was responding when I re-read your first post - uh-oh.
Just noticed you are on CentOS - does that mean you are also running LVM ???.

recneps_divad 08-23-2007 12:41 PM

CentOS, yes. LVM, no.

syg00 08-23-2007 11:18 PM

Given that mdadm works at the device layer and you're interested in the filesystem layer, try something like gparted. It has a liveCD, and it's a GUI (think Partition Magic). Grab the edge of the partition and drag it - it'll fix the filesystem at the same time.
I'd use the liveCD to make sure there are no mount issues.
Then boot up and see if mdadm will still talk to you - then add the other disk.
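
After the resize, the check and re-add would look something like this (names assumed again):

cat /proc/mdstat            # is the array still assembling ?
mdadm --detail /dev/md0     # and does it look sane ?
# if so, put the second disk back in and let it resync
mdadm /dev/md0 --add /dev/sda1
# the md device itself may still need growing before the extra space shows up
# (mdadm --grow /dev/md0 --size=max)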

BTW, the reason I said "safest" was because you can build the second RAID set without risking your current data. Generally considered a positive attribute in a production environment.
Once built, it's just a mount point.
Your data, your choice.

recneps_divad 12-18-2007 05:36 PM

Closure...
 
If anybody stumbles on this thread in the future, here's the epilogue to my tale.

I found a guy who knows a lot more about Linux than I did. We groped around a lot trying some of the same things that I had tried before. We finally hit on the answer that best solved the problem. (I'm paraphrasing here since he did the work remotely.)

The answer was to put the old system drive into a USB enclosure for later access. We installed the new drives in the server and did a fresh install of CentOS using RAID and LVM for the partitions. He then made a tarball of the old system and restored it over the top of the system he had just installed. This gave us a properly configured RAID of the right size, with the operating system and user files configured the way we wanted them.
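
Roughly what he did, as far as I could follow from watching remotely - the paths and device names are my guesses:

# old 300GB system disk in the USB enclosure, mounted read-only
mount -o ro /dev/sdc1 /mnt/old
# archive the old installation, preserving ownership and permissions
tar czpf /root/oldsystem.tar.gz -C /mnt/old .
# after the fresh CentOS install (software RAID + LVM), unpack it
# over the top (presumably being careful with things like the new
# /etc/fstab, mdadm.conf and grub setup)
tar xzpf /root/oldsystem.tar.gz -C /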

The server is now up and running and I'm a very happy fellow. Many thanks to all who chimed in to help.

