Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
I'm becoming lazy these days. I rely on a script written in /bin/sh that lets me run commands on Linux servers, about 45 servers to be exact. I use it mostly to grab the uptime. While I have Nagios plugged into about half of them, I still rely on the script to grab uptime and mail queue counts.
It's not too effective, but it does work the majority of the time. I just execute it as ./file.sh uptime and tada.
Problem is, it's really slow. It has to grab a password file that's located somewhere else, and it uses a secondary file to decode the password file; otherwise it's just a file with letters and numbers in it, and you wouldn't know what it was for. If that site is down, I cannot run the script, and if it's running slow, the script times out. The original programmer who wrote it no longer works for us, so I'm looking for a new method that can run more than just uptime.
I can run ./file.sh ps auxwww > somefile.txt and see everything that's running on all 45 systems, but when I wanted to get an ls -l of, say, /tmp, the output file was unreadable, so not everything can be run. Another use is appending content to a file like xinetd or syslog.conf:
cat something >> syslog.conf won't work; you would need cd /etc/ && cat something >> syslog.conf
Even that doesn't always work, and I have to go back and repair what it missed. The reason is that the script is only meant to log in as root and run simple commands; it has a hard time with anything else. If I wanted to change something in the nrpe.cfg file that's plugged into Nagios, I'd have to do them all manually.
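For what it's worth, with plain ssh the usual gotcha here is that an unquoted redirection is expanded by the local shell before ssh ever runs, so the append lands on the wrong machine. A minimal sketch of the difference (hostnames and paths are just examples):

```shell
# Unquoted: the local shell handles >>, so the append happens on THIS machine:
#   ssh server1 cat /etc/xinetd.conf >> xinetd.conf
# Quoted: the whole command line is passed to the remote shell, so the
# append happens on server1:
#   ssh server1 'cat /tmp/fragment >> /etc/syslog.conf'

# The same principle demonstrated locally with a subshell standing in
# for the remote shell:
target=$(mktemp)
echo 'appended line' | sh -c "cat >> $target"
cat "$target"
```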
So anyway, now that you have the idea: I'm looking for a new method of running any command I want on 45 different servers. It can use a local file to get a root password, or SSH keys. Anyone have any ideas?
Two ideas. First, the SSH servers on the targets may be doing a reverse DNS lookup. If your server isn't listed in DNS, that lookup will be slow; you can disable it in the sshd configuration. Second, you could have the remote servers make the connection to your local server and send the info, in a cron job perhaps. If it's just the uptime you are interested in, perhaps they could simply email your machine with the info.
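As a sketch of that push approach (the script name, schedule, and address are all hypothetical), each server could run something like this from cron:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/uptime-report.sh, run from cron on each server:
#   0 6 * * * /usr/local/bin/uptime-report.sh
# Builds a one-line status report and pushes it out, instead of waiting
# to be polled over a slow password-file mechanism.
host=$(hostname)
report="${host}: $(uptime)"
echo "$report"
# To actually deliver it, pipe to mail (assumes a working local MTA):
#   echo "$report" | mail -s "uptime ${host}" admin@example.com
```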
Can't you just let the remote systems send an email once a day with the statistics you need?
That would be a lot of emails a day, a lot. The script can get the info all at once and, when it does work, email me just one message with the results from all the systems.
I wonder what type of servers you are talking about, but in my company we use a program called xCAT. It's mainly for kickstarting lots of servers at the same time. It also comes with a command called psh that works as a parallel ssh.
All the systems are Linux: CentOS, Fedora, pretty much Red Hat knockoffs. It's a handy script when it works right; it's just a few files written in Perl and plain text.
Why not use a key? Doing that, you eliminate the need for the password file that apparently sits on another server somewhere. Once that is done, you could set up a simple bash script to do something such as ...
Code:
#!/bin/bash
server_list=(server1 server2 server3 ...)
temp_file=$(mktemp /tmp/tmp.XXXXXX)
if [ "$1" != "" ]
then
    for i in "${server_list[@]}"
    do
        echo "-- ${i} --" >> "${temp_file}"
        ssh "${i}" "$*" >> "${temp_file}"
    done
    mailx -s "Server Report $* - $(date +%d-%b-%Y)" email_address < "${temp_file}"
    #rm -f "${temp_file}"
else
    echo "Please enter a command to run."
fi
exit
That should give you what you are looking for (though the mailx command might need some work; I pulled it off of an old Solaris box). You'll need to expand the server_list variable to what you want and change the email_address, obviously. I used mktemp so that multiple commands can be run at the same time (otherwise you'd get truncation of some output, and interleaved data from different commands ... really messy). I commented out the removal of the temp_file. If it is something that you want to keep, you can put a move in there to somewhere a bit more appropriate ...
Code:
mv "${temp_file}" "$1_$(date +%d-%b-%y-%H%M).dat"
say, so that you know what command was run and all. I added the %H%M to the date so that, should you need to run a command more than once a day, the previous run's data wouldn't be overwritten, for comparison's sake.
=====
NOTE: If you do this, you should restrict the script and be VERY careful when using it. It is one thing to type a bad command into one server, but to send that one bad command to 45 servers ... /shudder.
=====
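For the key setup the script above relies on, the one-time steps look roughly like this (the key file name and host are placeholders; ssh-copy-id is the convenient route, though appending the .pub file to ~/.ssh/authorized_keys by hand also works):

```shell
# Generate a key pair on the admin machine. An empty passphrase (-N "")
# allows unattended use; a passphrase plus ssh-agent is safer.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/lq_demo_key"
ls "$keydir"

# Install the public key on each server (placeholder host):
#   ssh-copy-id -i "$keydir/lq_demo_key.pub" root@server1
# After that, the report script can log in without a password:
#   ssh -i "$keydir/lq_demo_key" root@server1 uptime
```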
Last edited by Hobbletoe; 06-22-2006 at 10:02 AM.
Reason: Improved the script after having run it on Linux and not Solaris. Also checks to see that a command has been given to run.