Is Linux scalable from PC to PC?
Hi, I want to know whether Linux (a fresh installation) can be copied directly from one PC to another.
For example: say I have a P4 PC with an 80 GB HDD called C0mpaq744, and another machine, a P3 with a 60 GB HDD, say Acer331. They have completely different parts and specs inside, but the same architecture. Can I install Linux on C0mpaq744, then make the same partitions on Acer331, copy all the folders across, and set GRUB to boot the kernel? I hope udev will take care of the changed hardware on the fly... I need to know this because my office has multiple PC variants of the same architecture, and instead of installing Linux on every PC, I wonder if everything could just be copied over.
Quote:
I'd suggest just using Kickstart (Red Hat based) or FAI (Fully Automated Installer -- Debian based) to do a separate but automated install on each computer. That way the hardware detection routines within the installer will run and set up kernel modules and the like properly, but you won't need to go through the hassle of redoing the partitions and package selection (they'll be done for you, automatically).
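By way of illustration, a kickstart file for that kind of automated install can be quite short. Everything below (mirror URL, password, partitioning) is an assumed example, not a recommended configuration:

```
# ks.cfg -- minimal kickstart sketch (all values are made-up examples)
install
url --url=http://mirror.example.com/centos/6/os/i386
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
bootloader --location=mbr
reboot
%packages
@base
%end
```

The installer's own hardware detection still runs on each box, which is exactly why this approach copes with mixed chipsets where a raw disk copy might not.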
In theory a direct copy will work, but I'd do it with dd (a low-level disk copying program) so that you get stuff like the MBR and partition table. Whatever you decide to do, you should practice on test hardware before trying it on production systems.
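As a rough illustration of the dd approach (the device names are assumptions; on real hardware, verify them with lsblk first, because dd will silently overwrite whatever the target points at):

```shell
#!/bin/sh
# Whole-disk clone with dd: copies the MBR, partition table and all
# filesystems byte-for-byte. SRC and DST are assumed device names,
# not values from this thread -- check yours before running.
SRC="${SRC:-/dev/sda}"   # source disk (the installed machine)
DST="${DST:-/dev/sdb}"   # target disk -- ALL DATA HERE IS DESTROYED
dd if="$SRC" of="$DST" bs=4M conv=noerror
```

Note that dd needs the target disk to be at least as large as the source, so an 80 GB source will not fit on the 60 GB drive in the original example.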
Quote:
My guess is that the probability is greater than 50:50 that it won't work at first; you'd have much better chances if they were all, say, Compaqs of the same vintage, as then they would probably have the same chipsets, or if you knew that they all had Intel chipsets. Maybe you can fix it relatively easily, but it still might take more time than you would spend on the kickstart/autoyast/FAI approach.
Kapz -
Regardless of whether you're on Windows or Linux, an "install" always:

a) "tunes" the binaries (for your exact CPU/CPU architecture)
b) formats and partitions the drives (for your exact HD configuration)
c) adds low-level, "hidden" files (including loader software written to your hard drive's master boot record)

So unless everything *is* completely hardware-compatible, and unless you *do* pick up *all* files (not just those files on the "file system") ... you may or may not get a viable "clone". That's the bad news.

The good news: "Ghost" works pretty well on Windows, and there's a "Ghost for Linux" here: http://linux.softpedia.com/get/Syste...inux-053.shtml And, like the previous responses said, a good backup tool and/or a simple "dd" are often more than good enough to create a viable clone. And there are always virtualization alternatives like VMware or VBox!

'Hope that helps .. PSM
Thank you all very much for the guidance.
So now I can install Linux on multiple networked PCs through the Red Hat/Debian installer at the same time. Do I need to invoke a LAN boot and start the installation from a copy of Linux saved on a server? If not, then how does it work? I know that ghosting (cloning) could be done that way, but it does not work across varying HDDs and hardware. Our tech team has proposed to migrate three different units to the Linux desktop. Their PC specs differ, as different clients provided/required different hardware, but all have the same architecture, i.e. i686. The RAM and all the other specs, as well as the technology involved, such as bus speed, frequency, GPU, motherboard chipsets, LAN chipsets etc., vary greatly, because some are old PCs and a few are new. Hence we need to give them the turnaround time to do this. Any help is much appreciated.
It sounds like btmiller's suggestion is probably best for your scenario:
Quote:
Quote:
In any case, you can make a do-it-yourself version if you wanted. Assuming an rpm-based system with zypper, it would be something like the following. (This is very much thrown together from a similar fragment that I saw; you'd really want to look at it in detail before using it in earnest, and it would be easy to make a similar-ish script using the Debian tools. Getting a list of installed packages is easy and uncontroversial; using that list to add capabilities to another machine is the bit that needs care, and you might prefer to separate the 'read' and 'write' stages. As you can see from the 'untested' comments, I am very unwilling to take any responsibility for the use of this, so it's at your own risk.) Code:
#!/bin/sh

I'd only get really bothered about the automatic process if you wanted to install on hundreds of units, but it seems you only have a small number. (One piece of advice that I would offer if you go this way is to use a cache; if these PCs get their data via, say, a squid cache, and you set the refresh patterns to keep rpms/debs, as appropriate, for a while after the first install, the data should come down more quickly and it will save you bandwidth.)
Thank you very much salasi! :)
Yes, I pretty much asked the same question again, but I wanted to confirm whether I can LAN-boot the installation; that way implementation time would be reduced drastically. Otherwise we will require a vendor to do just the installation. That's the kind of thing I wanted; I had forgotten about scripts! Yeah, I can use your script, do some research and maybe implement it as well. It really helps, as our team is small, and the three units have 840 PCs, so an automated install is the only viable option.
At my job, I manage a large number of identical systems. I usually use PXE boot + kickstart + a local CentOS HTTP mirror to automate installation. There's also a tool called SystemImager that I've looked at some, but never used in production. As long as your machines support PXE booting, there's no need to run around sticking CDs into every machine, and you can manage kernel and kickstart parameters centrally (on the boot server). Pentium III class machines are a bit old for PXE support on desktop-class hardware, but maybe you'll get lucky (or you can get a BIOS update).
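For anyone curious what the boot-server side can look like, a minimal PXE setup can be little more than a dnsmasq stanza plus a pxelinux entry pointing at the kickstart file. Every path and address below is a made-up example, not a description of the poster's actual setup:

```
# /etc/dnsmasq.conf -- DHCP + TFTP for PXE clients (example values)
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftpboot

# /srv/tftpboot/pxelinux.cfg/default -- boot entry (example values)
default centos-ks
label centos-ks
  kernel vmlinuz
  append initrd=initrd.img ks=http://192.168.1.1/ks.cfg
```

The `ks=` boot parameter is what hands the installer its kickstart file, so the whole install runs unattended after the machine PXE-boots.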
Quote:
840 PCs makes the problem rather different, and makes your level of concern more understandable. (I'd still use the kickstart/autoyast type method, though, rather than the 'ghosting' method.) Oh, and by the way, at more of a system level, do you use DHCP? If you don't do at least something like that, you'll end up with 840 PCs with the same IP address, and that's not what you want either.
Hi,
Quote:
I for one would not want a forum presentation of how to install on machines that I maintain; that is, if I'm qualified to perform the necessary task(s). If not, then outsourcing to qualified contractors would be the way to go if your tech team doesn't have the abilities. This scenario isn't something that should be addressed from a Newbie or even a distribution forum, but from a management perspective with input from qualified personnel. :hattip: EDIT: BTW, scalable & scalability
Yeah, DHCP server, antivirus etc. are all being planned.
@Onebuck - Well, I am one of them in the 'IT' team. We do know the basics of Linux, but we are all basically Windows admins (Windows won't take the call and data load of changing users who need them 24x7), and we are planning to get Linux admins as well. But to present the outline of the transition process to the client IT manager, we need all such details: which distro(s), support, turnaround time etc., depending upon the client needs and the distribution features. I liked the idea of CentOS as suggested by btmiller, but since it's a pretty big change that we are proposing, we need to do some serious research and provide concrete timelines and the cost involved (if any). Thanks for the Wiki link.