LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   How to administer many identical desktops? (https://www.linuxquestions.org/questions/linux-server-73/how-to-administer-many-identical-desktops-614483/)

johann_p 01-18-2008 05:05 AM

How to administer many identical desktops?
 
OK, this is not really a server question in the narrow sense, but I did not know which forum would be more appropriate (tell me if there is one).

What I would like to know is how to go about administrating many computers that should have identical software installation (OS packages, CPAN packages, Ruby gems, R packages etc.), should all have the same users with the same permissions defined, should all be able to access new printers in the network etc.

This would mean that
1) all machines should be easily installed in very similar ways (apart from slight hardware differences, identical ways when it comes to software).
2) administrative commands, e.g. installing a package, should be carried out on all computers in some automatic, systematic, well-documented way instead of having the administrator do this manually or through some dirty scripting hack.
3) configuration changes should also get distributed to all computers ... e.g. a newly defined printer should be accessible identically on all of them
4) administrative stuff should not be visible or accessible to the end users: no security update notifications, no access to configuration changes via sudo etc.

So .. how is this usually done? Are there ways to automate and organize this kind of work independent of the Linux distro used?
Or are there distros which support these things in their own way? Which (free as in free beer) distro provides the best support?

We are in a planning phase of having everyone use a Linux desktop here but we need to sort these kinds of things out beforehand.

Going by the personal preferences of many here, Ubuntu would be the ideal choice, but that would mean Ubuntu has to support the actions described above in some way.

ilikejam 01-18-2008 06:30 AM

Hi.

There's two ways to do this that I can think of.
1) Use NFS to have a centralised file server that exports /usr/local or /opt or whatever to all the desktops - change the software on the NFS server and the changes will appear on the desktops automatically (since they're sharing the filesystem)

2) Set up your own apt/yum/whatever repository on a server, configure apt/yum/whatever on all the desktops to point to that, and set up a cron job that runs the package management 'update' every night to see if there's been any changes rolled out to the repository.
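For option 2), a minimal sketch of such a nightly cron job on a Debian-style desktop might look like the following. The script name, and the APPLY guard (so it only prints commands unless explicitly told to apply them), are my own additions for illustration; cron would invoke it with APPLY=1.

```shell
#!/bin/sh
# Sketch of a nightly package sync, e.g. dropped in /etc/cron.daily/.
# Unless APPLY=1 is set in the environment, commands are only printed,
# so the script can be reviewed safely before rolling it out.

run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

sync_packages() {
    # Refresh package lists from the central repository
    run apt-get update -qq
    # Apply whatever upgrades the repository now offers, non-interactively
    run env DEBIAN_FRONTEND=noninteractive apt-get -y -qq upgrade
}

sync_packages
```

The same shape works with yum on RPM-based desktops by swapping in the corresponding yum commands.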

Dave

rupertwh 01-18-2008 07:03 AM

Hi,

I am currently planning something similar and thought about creating one 'master' package, which consists mainly of dependencies and possibly some configuration script(s).
When I want some new software installed on the workstations, I'd just add it as a new dependency. apt-get dist-upgrade on the workstations via cron would do the rest.
(So far it's just an idea, I have zero experience with creating packages yet -- comments, anyone?).
(EDIT: That seems to be pretty much what Dave suggested)
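On Debian-based systems the 'master' package idea can be done without learning full packaging, via the equivs tool. A rough sketch, where the package name "site-desktop" and the dependency list are made-up examples: to push new software to all workstations you would add a Depends entry, bump the version, rebuild, and publish the .deb in the internal repository.

```shell
#!/bin/sh
# Sketch of a site-wide meta-package built with Debian's equivs tool.
# "site-desktop" and its dependency list are illustrative only.
set -e

cat > site-desktop.control <<'EOF'
Section: metapackages
Priority: optional
Standards-Version: 3.9.2
Package: site-desktop
Version: 1.0
Depends: openssh-client, cups-client, r-base, ruby
Description: Site-wide desktop software selection
 Empty meta-package; its dependencies define what every desktop gets.
EOF

# equivs-build turns the control file into site-desktop_1.0_all.deb
# (guarded so the sketch also runs on machines without equivs installed)
if command -v equivs-build >/dev/null 2>&1; then
    equivs-build site-desktop.control
fi
```

With this in the repository, the nightly apt-get dist-upgrade on the workstations pulls in whatever the new Depends line requires.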

For centralized user accounts (and much more), LDAP is your friend.

johann_p 01-18-2008 11:41 AM

Quote:

Originally Posted by ilikejam (Post 3027010)
Hi.

There's two ways to do this that I can think of.
1) Use NFS to have a centralised file server that exports /usr/local or /opt or whatever to all the desktops - change the software on the NFS server and the changes will appear on the desktops automatically (since they're sharing the filesystem)

2) Set up your own apt/yum/whatever repository on a server, configure apt/yum/whatever on all the desktops to point to that, and set up a cron job that runs the package management 'update' every night to see if there's been any changes rolled out to the repository.

Dave

Thanks -- we actually have some experience with solution 1) and the problems with that solution are one reason why we are looking for something that is more like what I described earlier.
Having a central, NFS mounted, /usr/local has many problems, e.g. when administering perl packages. I have also simplified the situation a bit by saying the computers will all have the same configuration ;)

As to solution 2) this is more or less what we would like to avoid, if possible: a self-written hack :)
Since OS packages would not be the only thing to administer centrally for groups of computers, we would need similar cron-job or other scripts for CPAN packages, plugin updates, configuration file changes etc.

What I was mainly wondering is this: all these tasks seem so common to everyone who has to deal with dozens or hundreds of Linux desktops that I was expecting some software solution that is more general and more organized than solving things with self-written scripts.

Are there packages, existing scripts, or software solutions that help with this?

ilikejam 01-18-2008 01:39 PM

There's a commercial offering from Sun called Update Connection Enterprise (used to be called 'Aduva Onstage') which was designed to do what you want (it's RPM/Solaris only), but it's not cheap.

Personally, I would just create your own .deb or .rpm packages and go with an apt/yum repository - rpm files can just be containers for scripts that are to be run, so you can roll out a package which does a CPAN download when it's 'installed'.
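The "package as a script container" trick Dave describes could be sketched as an RPM spec whose %post scriptlet does the real work. This is only an illustration: the package name "site-cpan-datetime" and the choice of CPAN module are invented, and the spec is deliberately minimal rather than build-ready for every rpmbuild version.

```shell
#!/bin/sh
# Sketch: a file-less RPM whose %post scriptlet installs a CPAN module
# on every desktop when the package is rolled out via the yum repository.
# Package name and module are made-up examples.
set -e

cat > site-cpan-datetime.spec <<'EOF'
Name:      site-cpan-datetime
Version:   1.0
Release:   1
Summary:   Installs the DateTime CPAN module on this host
License:   GPL
BuildArch: noarch

%description
File-less package; the %post scriptlet does the real work.

%post
# Runs on each desktop at install/upgrade time
cpan DateTime

%files
EOF

# Build with: rpmbuild -bb site-cpan-datetime.spec
# then publish the resulting noarch .rpm in the internal repository.
```

The .deb equivalent would put the same commands in a postinst maintainer script.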

Dave

forrestt 01-18-2008 02:29 PM

Check out cfengine (http://www.cfengine.org/). It has a fairly steep learning curve, but it will do everything you want.

HTH

Forrest

johann_p 01-18-2008 04:45 PM

Quote:

Originally Posted by forrestt (Post 3027414)
Check out cfengine (http://www.cfengine.org/). It has a fairly steep learning curve, but it will do everything you want.

HTH

Forrest

This looks very interesting indeed. Will take a while to figure and try it out, but we will definitely look at this!
Thanks!

systemnotes 01-19-2008 03:07 AM

Aside from cfengine, we use a customized system of cron scripts, so that we can force changes where needed. This works great for configuration files, such as printers, /etc/resolv.conf, NIS or LDAP files, etc. -- anything that all machines should have.

Basically, there is a central repository that all machines can access. This can be NFS or AFS, depending on how secure you want it.

Machines are configured to be part of a "cluster" or "duty" through a local configuration file.

Every machine has a set of cron scripts that are called up at appropriate times through crontab.
rc.1_per_boot
rc.1_per_hour
rc.1_per_day
rc.1_per_month
rc.4_per_hour
... etc...

The scripts check for the existence of a file in the shared directory structure. The machine knows what cluster it belongs to and what duties it should perform.

If there is no file in the repository, the scripts run at their scheduled time, find nothing, and quit. If there is a file, they either copy it or run it as a script, depending on how you set things up.

This works on multiple platforms (hpux, solaris, linux) if done properly.
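One of those per-interval scripts could be sketched roughly as below. The repository path, the /etc/cluster file, and the "run if executable, otherwise copy" convention are my own guesses at the scheme described above, not the poster's actual setup.

```shell
#!/bin/sh
# Sketch of an rc.1_per_day style cron script. Paths and the cluster
# file are illustrative assumptions, not a real site layout.

REPO="${REPO:-/central/repo}"            # NFS/AFS-mounted repository
CLUSTER=$(cat /etc/cluster 2>/dev/null)  # which "cluster" this host is in
CLUSTER="${CLUSTER:-default}"

run_daily_jobs() {
    dir="$REPO/$CLUSTER/1_per_day"
    # Nothing staged for this cluster: quit quietly
    [ -d "$dir" ] || return 0
    for job in "$dir"/*; do
        [ -e "$job" ] || continue
        if [ -x "$job" ]; then
            "$job"            # a staged script: run it
        else
            cp "$job" /etc/   # a plain file: copy it into place
        fi
    done
}

run_daily_jobs
```

Because it is plain POSIX shell, the same script runs unchanged on HP-UX, Solaris, and Linux, which matches the multi-platform point above.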

Rather than installing software locally, it is easier to access it from NFS using rc.1_per_day, e.g.

rm -f /usr/bin/perl
ln -s /pkgs/perl/5.6.1/bin/perl /usr/bin/perl

You probably wouldn't do this with perl, but you get the idea how easy it is to "upgrade" the local version to a centrally located file. The problem is that you can't do this for laptops, since they will require local files.

-----
As for upgrading on multiple machines: this is not an easy task. I wouldn't recommend using a global yum update, because it is difficult to know what each machine has, and also to limit the repository. If you do this, be prepared to reimage some machines if everything breaks.

yum is a good way to do updates, but it has to be controlled, and probably shouldn't be left running automatically. If someone modifies their yum.conf, and an update runs it can mess up the machine.

