LinuxQuestions.org
Old 10-08-2010, 07:05 PM   #1
Zippy1970
Member
 
Registered: Sep 2007
Posts: 119

Rep: Reputation: 17
Afraid of doing a (Debian) distribution upgrade


I built my webserver about two years ago and installed Debian Etch on it. Physically, the machine runs on a Core 2 Duo E6750 with 2GB of memory and two 160GB HDDs in software RAID 1. A third 500GB HDD is used for daily backups (with 7 days of retention). It also has remote KVM-over-IP built in; I can even control the power button remotely.

It has always run perfectly without so much as a hiccup. The hardware has never failed and the Debian Etch installation has been rock stable.

Unfortunately, security updates were discontinued as of February this year (2010). This means I really should upgrade to Debian Lenny.

The problem is, the webserver is located in a data center about two hours from where I live. If I had the server next to me, I could easily try a dist-upgrade and make preparations in case anything went wrong, like dropping in an extra HDD, booting the machine from a live CD, or even temporarily routing traffic to a backup webserver. I could also easily look things up on the internet (on my own PC) if I ran into something I didn't know how to solve right away.

But as it is, I either have to do the upgrade remotely, or go to the data center and do it there (where I only have physical access to my own webserver).

I know it should be easy. But I'm afraid to do it, because if something goes wrong that can't be solved remotely, I'll probably have to pick up the server from the data center and reinstall it at home, meaning my (and my clients') websites will be offline during that time.

But keeping Debian Etch is of course not an option. The longer I wait, the more vulnerable the server becomes to attackers. I've already waited far too long.

I'm no Linux newbie at all, but there are some things I simply don't know enough about to comfortably do this upgrade.

The idea I had for this upgrade was to take the two HDDs out of the RAID array and do the upgrade on one of them. If the upgrade succeeds, I let the other HDD sync up again. If the upgrade fails, I let the HDD with the failed upgrade resync from the HDD that still holds the old setup. To me, that sounds like a solid plan.

But I really don't know how to do that. Back when I built the server, I followed a HowTo to set up the software RAID, so I don't know how to break the array and rebuild it later on. I was hoping people here could give me tips on how to do this upgrade.
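
My rough guess, after skimming the mdadm man page, is that it would go something like this (the md and partition names are just guesses for my setup, and I have no idea whether this is actually right or safe):

Code:
mdadm /dev/md0 --fail /dev/sdb1      # mark the second mirror as failed
mdadm /dev/md0 --remove /dev/sdb1    # take it out of the array
# ...do the dist-upgrade with the array running degraded on /dev/sda1...
mdadm /dev/md0 --add /dev/sdb1       # if all went well, add it back
cat /proc/mdstat                     # and watch it resync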

Or perhaps you have another idea for how to do the distribution upgrade with my current hardware? Like I said, there's a third 500GB hard disk in the system I could use. As long as the end result is the same system I have now, but with Debian Lenny instead of Etch...

The reason I don't want to just blindly do a dist-upgrade is that I tried it on an image of my webserver running inside a virtual machine, and after the upgrade the (virtual) machine would no longer boot.
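
For clarity, by a dist-upgrade I mean roughly what I did in the VM (the Lenny release notes recommend aptitude and a more careful sequence; this is just the gist):

Code:
# in /etc/apt/sources.list, change every "etch" to "lenny", then:
apt-get update
apt-get upgrade         # first the packages that need nothing new installed
apt-get dist-upgrade    # then the full upgrade (new kernel, new dependencies)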

Thanks in advance,
 
Old 10-08-2010, 10:05 PM   #2
Meson
Member
 
Registered: Oct 2007
Distribution: Arch x86_64
Posts: 606

Rep: Reputation: 67
First, not really the main point of the thread, but rather than keeping 7 days of linear backups, check out grandfather/father/son backups or Tower of Hanoi backups. You'll get a longer history with the same number of backup points.
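
A rough sketch of the grandfather/father/son idea, assuming rsync-style backups (the paths and schedule are made up, adjust to taste):

Code:
#!/bin/sh
# 7 daily + ~5 weekly + 12 monthly slots instead of 7 linear days
SRC=/var/www
DEST=/backup

rsync -a --delete "$SRC/" "$DEST/daily-$(date +%a)/"        # one slot per weekday
if [ "$(date +%a)" = "Sun" ]; then
    rsync -a --delete "$SRC/" "$DEST/weekly-$(( ($(date +%-d) - 1) / 7 + 1 ))/"
fi
if [ "$(date +%d)" = "01" ]; then
    rsync -a --delete "$SRC/" "$DEST/monthly-$(date +%b)/"
fi

That gives you roughly a year of history from about two dozen slots instead of one week from seven.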

Anyway, turning RAID into a dual boot isn't a bad idea. As long as you are able to get the server booted and on the network after the upgrade, you should be able to solve the rest of your problems from home.

You can also build a backup server at home that you can fail over to if there is a problem. Really, you should have a backup server in a separate datacenter anyway if high availability is that important to you. In fact, how do you do your development? It sounds like you have to do it directly in production. For that reason alone you should have a secondary server that can double as a backup and as a dev server.
 
Old 10-09-2010, 03:25 PM   #3
Zippy1970
Member
 
Registered: Sep 2007
Posts: 119

Original Poster
Rep: Reputation: 17
Quote:
Originally Posted by Meson
First, not really the main point of the thread, but rather than keeping 7 days of linear backups, check out grandfather/father/son backups or Tower of Hanoi backups. You'll get a longer history with the same number of backup points.
Thanks, will do.

Quote:
Anyway, turning RAID into a dual boot isn't a bad idea. As long as you are able to get the server booted and on the network after the upgrade, you should be able to solve the rest of your problems from home.
Yes, but I don't know how to turn the RAID into a dual boot...

Quote:
You can also build a backup server at home that you can fail over to if there is a problem. Really, you should have a backup server in a separate datacenter anyway if high availability is that important to you.
I have a backup server. I mean, I have a complete webserver here at home in case of fatal hardware failure. If for whatever reason my "real" webserver fails, I can take this backup webserver to the datacenter and swap it for the other one.

Quote:
In fact, how do you do your development? It sounds like you have to do it directly in production. For that reason alone you should have a secondary server that can double as a backup and as a dev server.
I use an image of my webserver inside a virtual machine for development.
 
Old 10-19-2010, 10:17 AM   #4
Zippy1970
Member
 
Registered: Sep 2007
Posts: 119

Original Poster
Rep: Reputation: 17
So can anyone help me with my questions in my first post?
 
Old 10-19-2010, 11:34 AM   #5
Noway2
Senior Member
 
Registered: Jul 2007
Distribution: Gentoo
Posts: 2,125

Rep: Reputation: 781
I don't think there is an easy answer to your situation and questions. The virtual machine, the lack of physical access, and the RAID each add their own complexity, and taken together, well, it gets to be a real mess.

You could remove one of the drives from the RAID, which would keep you running on your current system; however, this can cause problems of its own. The one time I tested my RAID 1 to see if I could safely remove a drive, one of the drives got messed up and I ended up erasing it and letting it rebuild from the other one.

An installation of Linux will, in general, port from one machine to another, unlike Windows, largely because the drivers are loaded at run time rather than fixed at install time. Therefore, I might suggest that you separately create a fresh install of the current distribution on new drive(s), then cleanly shut the old system down and replace the drive(s) with the newly installed one(s). That way, if things don't go well, you can always put your original drives back in.

If this is not feasible, or you can't afford more drives, then I would, at a minimum, make a complete image of the 160GB drive, and verify that you can restore from it, before attempting the upgrade.
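
Something along these lines would do for the image (the device and mount point are assumptions for your setup, and ideally you'd do it from a rescue environment or with services stopped so the filesystem is quiet):

Code:
# image the whole 160GB disk onto the 500GB backup disk
dd if=/dev/sda of=/mnt/backup/sda-pre-upgrade.img bs=4M
# rolling back later overwrites the disk with the saved image:
# dd if=/mnt/backup/sda-pre-upgrade.img of=/dev/sda bs=4M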

For what it's worth, recent distribution upgrades have gone relatively smoothly compared to the past. My laptops have upgraded without incident. My server had a few issues where I had to change the order of some of the init scripts to bring certain functions up before others.
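
On Debian with the old sysvinit scripts, changing that order is just a matter of the S##/K## numbers; something like this (the service name and numbers are only an example):

Code:
update-rc.d -f myservice remove                             # drop the existing rc links
update-rc.d myservice start 25 2 3 4 5 . stop 75 0 1 6 .    # recreate them later in the sequence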
 
Old 10-19-2010, 10:25 PM   #6
ComputerErik
Member
 
Registered: Apr 2005
Location: NYC
Distribution: Debian, RHEL
Posts: 269

Rep: Reputation: 54
If you have the spare server at home, and the VM environment to play with safely and easily, why not do just that? My typical MO is to play out any upgrade using either a VM or a mirror of the hardware, work out any issues, and then go through on the real deal. You also said you have remote KVM over IP for the remote machine, so even if an automated update fails, you should be able to resolve any issues by booting into rescue mode.
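
Rescue mode is nothing scary; from the Debian installer CD (or any live CD) it boils down to something like this (device names assumed, and the installer's rescue menu will do most of it for you):

Code:
mdadm --assemble --scan       # bring the RAID 1 array(s) back up
mount /dev/md0 /mnt           # mount the root filesystem
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt /bin/bash         # then fix whatever broke (grub, fstab, kernel, ...)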

Does the data center not offer remote hands? If you are really concerned, it might be worth the effort to build your spare server locally with Lenny and then ship it to the data center. Then have the remote hands swap the servers at some predefined time, and restore from the latest backup of your sites. Once you're satisfied, just have the Etch server shipped back to you as is.
 
Old 10-22-2010, 10:06 AM   #7
Zippy1970
Member
 
Registered: Sep 2007
Posts: 119

Original Poster
Rep: Reputation: 17
Quote:
Originally Posted by ComputerErik
If you have the spare server at home, and the VM environment to play with safely and easily, why not do just that?
I did, and as I said in my first post, the upgrade failed on an image inside a virtual machine. And I don't know whether the failure was actually caused by the fact that it was running inside a virtual machine.

Quote:
My typical MO is to play out any upgrade using either a VM or a mirror of the hardware, work out any issues, and then go through on the real deal.
The backup server I have isn't the same hardware, which is the reason I tested it on a VM instead (since that at least gave me the option to easily roll back to the original pre-upgrade image).

Quote:
You also said you have remote KVM over IP for the remote machine, so even if an automated update fails, you should be able to resolve any issues by booting into rescue mode.
I have no experience with rescue mode.

Quote:
Does the data center not offer remote hands? If you are really concerned, it might be worth the effort to build your spare server locally with Lenny and then ship it to the data center. Then have the remote hands swap the servers at some predefined time, and restore from the latest backup of your sites. Once you're satisfied, just have the Etch server shipped back to you as is.
Yes, that's always an option, but like I said, wouldn't it be easier to simply take the two HDDs out of the RAID (since they are each other's mirror), boot from one, and try the dist-upgrade? If it fails, simply boot from the other HDD and resync the RAID to restore the server to its original state.

It sounds simple (to me), but I don't know how to take the HDDs out of RAID.
 
Old 10-23-2010, 06:23 PM   #8
ComputerErik
Member
 
Registered: Apr 2005
Location: NYC
Distribution: Debian, RHEL
Posts: 269

Rep: Reputation: 54
What was the failure when you tried the upgrade on the VM? I would be more inclined to work through those problems on the VM, or on the similar spare hardware. Then, once you have the upgrade procedure down, make some backups, schedule downtime, and attempt it on the production server.

The idea of breaking the RAID and upgrading one drive at a time might seem safe at first glance, but my feeling is that it will bring more problems with it. However, if I were to go down this road, I would physically remove one drive so I could be sure nothing happens to my failsafe drive. During the upgrade it is conceivable that the new OS will see a working drive that is supposed to be an array member and start a sync without asking. If the upgrade then fails, you are up the creek without a paddle.
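
And if you do go down that road anyway, at least keep an eye on what md is doing before and after, so a resync can't start behind your back (md device name assumed):

Code:
cat /proc/mdstat          # shows [UU]/[U_] state and any resync in progress
mdadm --detail /dev/md0   # shows which disks are members and their current state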
 
Old 10-25-2010, 04:57 PM   #9
Zippy1970
Member
 
Registered: Sep 2007
Posts: 119

Original Poster
Rep: Reputation: 17
Quote:
Originally Posted by ComputerErik
What was the failure when you tried the upgrade on the VM?
It's been a while since I did that, so I really don't remember. I do know it was an easy fix on the VM, but it showed me that an upgrade can easily go south.

The VM is of course much different (hardware-wise) from my production server, so I'm sure that if the upgrade fails on the production server, it will fail on something completely different.

Quote:
The idea of breaking the RAID and upgrading one drive at a time might seem safe at first glance, but my feeling is that it will bring more problems with it. However, if I were to go down this road, I would physically remove one drive so I could be sure nothing happens to my failsafe drive. During the upgrade it is conceivable that the new OS will see a working drive that is supposed to be an array member and start a sync without asking. If the upgrade then fails, you are up the creek without a paddle.
Hmmmm. Good point.
 
1 member found this post helpful.
  

