Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
We're setting up a centralized server from which we can access all other servers in our organization. The idea is to use this server for deploying scripts (typically Perl, Python, or shell) on our servers by logging in, running the scripts, and retrieving the result (exit code or whatever).
I'm sure every organization has a similar scheme. So what I'm looking for is basically some advice on possible pitfalls and such.
One idea I have is to set up some sort of "framework" for our scripts. I haven't thought this through, but I imagine it must be possible to create a simple framework to reduce the possibility of deploying an erroneous script throughout the enterprise.
By such a framework I'm actually thinking of a (set of) script(s) into which one can pass parameters describing what information needs to be retrieved from the servers, what needs to be executed, which servers to deploy this on, and so forth. For example, if the "framework script" took one parameter listing the set of servers, and another with the command that needs to be executed on those servers, maybe one could use this to reduce the possibility of error.
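Something like this is the rough sketch I have in mind. Everything here is hypothetical: the host-list file, the results directory, and key-based SSH auth from the central server are all assumptions, not decisions we've made yet.

```shell
#!/bin/sh
# run_on_servers: hypothetical sketch of a deployment "framework".
#   $1 = file listing target hosts, one per line
#   $2 = command to execute on each host
# SSH is overridable so the transport can be swapped out or tested.
SSH="${SSH:-ssh -o BatchMode=yes -o ConnectTimeout=10}"
OUTDIR="${OUTDIR:-./results}"

run_on_servers() {
    list="$1"; cmd="$2"
    mkdir -p "$OUTDIR"
    while read -r host; do
        [ -n "$host" ] || continue
        # Save each host's stdout, stderr, and exit code for later review.
        $SSH "$host" "$cmd" > "$OUTDIR/$host.out" 2> "$OUTDIR/$host.err"
        echo $? > "$OUTDIR/$host.rc"
    done < "$list"
}
```

The point of funneling everything through one wrapper like this is that, e.g., `run_on_servers webservers.txt 'df -P'` leaves one `.out`/`.rc` pair per host, so a caller can refuse to continue (or alert) if any `.rc` is non-zero, instead of each ad-hoc script reinventing its own error handling.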
I'm not sure if what I'm looking for is very clear, and in that case please comment on this.
We're actually already using Puppet, and it solves many of our needs regarding configuring the servers.
I'm not yet a Puppet expert, but there seem to be some things best left to plain old scripts: if we, for example, need to gather information about whether the file systems on our servers have been mounted cleanly, or just run a few commands to check disk space, I'm not sure Puppet is the best way to go. It sounds better to simply have a small shell/Perl/Python script executed on the servers, and fetch the result to be stored on the centralized SSH server.
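For the disk-space case, a tiny check like this is the kind of thing I mean (a sketch only; the 90% threshold and the idea of piping `df -P` output into it are my own assumptions):

```shell
#!/bin/sh
# check_disk: read `df -P` output on stdin and report any filesystem
# at or above the given usage threshold (default 90%).
# Exit status: 0 = all OK, 1 = at least one filesystem over the limit.
check_disk() {
    awk -v limit="${1:-90}" 'NR > 1 {
        use = $5; sub(/%/, "", use)   # strip the "%" from the Capacity field
        if (use + 0 >= limit) { print $6 " at " use "%"; bad = 1 }
    } END { exit bad }'
}

# On each server, the central box would run something like:
#   df -P | check_disk 90
```

The exit code is what the central server would collect and store, which is exactly the "run a few commands and fetch the result" pattern I don't think Puppet is meant for.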
It may be that I underestimate the functionality of Puppet, but I don't think Puppet is designed for cases such as this.