I'm over my head - need a cookbook for transferring system to new disks
I'm hoping that somebody can help me by directing me to a step-by-step
cookbook of some sort for what I'm trying to do. Here's the summary:
We've got a Dell SC1425 1U server. The system has dual 300GB SATA drives
that I've configured in a software RAID 1. It's running CentOS 4.5 and
is pretty much up to date. We've also got Plesk installed on it, and
its purpose is to host websites.
We recently got this idea to replace the 300GB drives with 500GB drives.
My programmer and I thought that the best way to accomplish this would
be to use Linux Ghost to copy the contents of the drives. So we pulled
one of the 300's and replaced it with the 500. Then we ghosted the 300
to the 500. Next we replaced the 300 with the other 500 and used ghost
again to copy the data on the first 500 to the second. We rebooted the
system and it worked! Well, almost.
The problem was that Linux Ghost did a fantastic job of duplicating
the exact dimensions of the 300 drives onto the 500. So we had a system
with partitions that were only 300GB in size. Obviously we wanted to
take advantage of all that extra space. And that's where things went
wrong.
We've tried a number of things from using Linux Ghost to expand the
partition to using fdisk and dd to redefine the partitions and then
copy the data from one of the 300's using an external USB 2.0 drive.
Nothing works. Where did we go wrong?
IMHO you would have been better off using "cp -a .." (per partition), or rsync - something like that. It would have used the target filesystem as it was defined rather than forcing the source filesystem definition on you.
Most (all ??) filesystems can be expanded o.k. - by default they will fill the partition, so no need to screw around trying to work out exact sizes. Each filesystem will have its own command.
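For ext2/ext3, a minimal sketch of "fill the partition" (assuming the filesystem lives on /dev/md0 - the device name is illustrative):

```shell
# Grow an ext2/ext3 filesystem to fill its (already enlarged) device.
# Run this with the filesystem unmounted; other filesystems have their own tools.
e2fsck -f /dev/md0      # resize2fs insists on a clean filesystem check first
resize2fs /dev/md0      # with no size argument it expands to fill the device
```

On RHEL/CentOS 4 era systems, growing a *mounted* ext3 filesystem was done with ext2online rather than resize2fs.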
Maybe the GParted LiveCD could help you to configure your partitions. ASPLinux is another name that comes up here, as it allows you to modify partitions that already contain data.
Thanks for trying to help guys. When I said that I had a problem in my post, I meant that I now have an unbootable system. I should have also disclosed that although I have 30+ years of computer experience and am fairly clever, I'm still a Linux newbie and am a bit over my head. Not that I don't know about boot-blocks and all of that; it's just that I'm still learning this stuff. That's why I've decided to fall on my sword and hope that somebody can give me some link to a step-by-step resource on what to do in these situations.
Quote:
When I said that I had a problem in my post, I meant that I now have an unbootable system.
Mmmmm - that wasn't conveyed by
Quote:
We rebooted the system and it worked! Well, almost.
I read that as you were (merely) pissed off that you couldn't use the extra 200 Gig.
Presumably you still have the 300 Gig drives. Why not simply redo the operation from the start? You know that (sorta) works. Get that working again, and tell us what filesystem type, and somebody will (hopefully) tell you how to expand to fill the drive.
Most f/s will do it "on the fly" with the partition mounted. Else you can use a liveCD.
The system became unbootable when we tried one of the options in Linux Ghost to expand the partition. (The partition holds ext3, though fdisk reports its type as Linux raid - the filesystem lives inside the RAID device.) That's what killed it. Since then I've tried various things to copy the data from the old disk. I suppose I should go back to the original copy operation as before. It took five hours, so I've been reluctant to do it. Now I believe that I have no choice but to go back to at least that step.
Wouldn't it be easier to let the mirroring handle it ???.
Break the RAID set, remove the second drive, add a new bigger disk. When the synch has finished, break the RAID, take out the final 300-Gig and add the other big drive.
Sound too easy ??? Note that I've never used software mirroring.
Just me musing into my (cuppa) tea.
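That mirror-swap sequence might look something like this, assuming the array is /dev/md0 and the member being replaced is /dev/sda1 (names illustrative; the new disk needs a type-fd partition created on it first):

```shell
# Drop one old member out of the RAID 1 array...
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
# ...physically swap in the new 500GB disk, partition it (type fd),
# then add it and let the mirror rebuild onto the bigger drive.
mdadm /dev/md0 --add /dev/sda1
# Watch the resync; once it shows a clean [UU] mirror,
# repeat the whole dance for the other disk.
cat /proc/mdstat
```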
Guys, I've really appreciated all the help you all have been trying to give me. I've actually got the server up and running again. Here's where it stands at the moment:
I pulled out one of the new 500GB drives and restored one of the old 300GB drives. I booted, the server found it and booted from it. I then used # mdadm --add to add the 500GB drive to the RAID 1 array. I waited for the sync to complete. Then I ran grub and did the setup, just to be sure, and then shut down.
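"Ran grub and did the setup" on a mirrored pair usually means installing the boot loader into the MBR of both disks, so either one can boot alone. Roughly, for GRUB legacy (device and partition names illustrative):

```shell
grub --batch <<'EOF'
root (hd0,0)
setup (hd0)              # install to the MBR of the first disk
device (hd0) /dev/sdb    # remap hd0 to the second disk
root (hd0,0)
setup (hd0)              # install the same thing there too
EOF
```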
Next I pulled the 300GB and put the 500GB back in. (I had previously initialized my 500GB drives with the partition sizes that I wanted. Something that I hadn't mentioned earlier is that I had a Linux swap partition at the very end of the drive that I had to move back to the end. So I had properly sized partitions, just not bootable ones.)
I booted my server and it found the cloned system on the "original" 500GB drive. I've now used mdadm --add to merge the other 500GB drive with the working one. Looks like I'm going to have a working system again.
Just one thing: mdadm noticed that the partition on the 500GB drive was larger than the booted partition on the 300GB system, so when it did the clone it ended up creating a partition that's only 300GB in size. The good news (if there is some) is that at least my swap partition is still at the end of my drive. So what I have now looks like this:
   Device  Start    End      Blocks  Id  System
/dev/sdb1      1  36726  290985313+  fd  Linux raid
/dev/sdb2  36727  60500  194980905   83  Linux
/dev/sdb3  60501  60801    2417782+  82  Linux swap
So my challenge is to expand sdb1 to include the space found in sdb2. Ideas?
Simplest (and safest) answer:
- create a new RAID pair in the spare 200Gig on each disk. Then move your biggest/fastest growing data over there.
Less simple:
- break the pair, expand the partition and filesystem, rebuild the pair.
I presume this mirrors (er, sorry about that ) what you tried to do last time.
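The "less simple" route might look roughly like this (device names illustrative; back up first - repartitioning a live mirror member is easy to get wrong, and mdadm --grow needs a reasonably recent mdadm):

```shell
# For each disk in turn: drop it from the array, enlarge its partition,
# then re-add it and wait for the resync before touching the other disk.
mdadm /dev/md0 --fail /dev/sdb1 && mdadm /dev/md0 --remove /dev/sdb1
# (in fdisk: delete sdb1 and sdb2, recreate sdb1 with the same start
#  cylinder but the larger end, type fd, then:)
mdadm /dev/md0 --add /dev/sdb1
# Once both members are enlarged, grow the array and then the filesystem:
mdadm --grow /dev/md0 --size=max
e2fsck -f /dev/md0 && resize2fs /dev/md0
```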
Safe answer? Not a chance! Actually, for what I want to do with this system (host websites), having data split in different places does me no good. So, on with expanding the partition.
I like your suggestion about breaking the pair. That at least gives me some fall-back position if the partition games don't work. (Not that all is lost - I still have the original 300GB drives as the backup.)
Given that mdadm works at the device layer, and you're interested in the filesystem layer, try something like GParted. It has a liveCD, and is a GUI (think Partition Magic). Grab the edge of the partition and drag it; it'll fix the filesystem at the same time.
I'd use the liveCD to make sure there are no mount issues.
Then boot up and see if mdadm will still talk to you - then add the other disk.
BTW, the reason I said "safest" was because you can build the second RAID set without risking your current data. Generally considered a positive attribute in a production environment.
Once built, it's just a mount point.
Your data, your choice.
If anybody stumbles on this thread in the future, here's the epilogue to my tale.
I found a guy who knows a lot more about Linux than I did. We groped around a lot trying some of the same things that I had tried before. We finally hit on the answer that best solved the problem. (I'm paraphrasing here since he did the work remotely.)
The answer was to load the old system drive into a USB enclosure for later access. We installed the new drives in the server and did a fresh install of CentOS using RAID and LVM for the partitions. He then made a tar-ball of the old system and restored it over the top of the system he had installed. This gave us the benefit of having a properly configured RAID of the proper size and the operating system configured (and user files) as we had wanted it.
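Roughly what that restore looked like, assuming the old root filesystem is mounted read-only at /mnt/old via the USB enclosure (paths and exclusions illustrative - the point is to keep the new install's disk-layout files):

```shell
# Archive the old system, preserving permissions and ownership
tar -C /mnt/old -cpzf /root/oldsys.tar.gz .
# Restore over the fresh install, but keep files that describe the NEW
# layout (fstab, bootloader config, mdadm/LVM setup) - restoring the
# old ones would point the system back at partitions that no longer exist
tar -C / -xpzf /root/oldsys.tar.gz \
    --exclude='./etc/fstab' --exclude='./boot/grub' --exclude='./etc/mdadm.conf'
```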
The server is now up and running and I'm a very happy fellow. Many thanks to all that had chimed in to help.