Executing multiple commands on multiple Linux machines
I have 200 Linux machines and I need some way to automate them.
The tasks I'm going to run are mostly network scans: nmap and similar tools (I work for a company that puts all responsibility for security on me, and I need to scan around a thousand networks per day to make sure our external security is fine). So the solution needs:
1) The ability to quickly edit all the commands we're going to send to our 200 servers
2) The ability to get some data back (such as tcpdump capture files on request, archived nmap XMLs, etc.)
I've been searching for a solution for two weeks now and came up with a few approaches that didn't work out. Here's what they were and why they failed for me:
SOLUTION #1: I wrote my own FTP bash script that looks for executable bash scripts in an FTP folder on the Internet (I uploaded 200 files to the FTP server, one numbered file per machine).
-- Problem #1 with this solution: the traffic from 200 machines overloaded the FTP host, which broke the scripts (plus I'm not the best bash script writer, and the long FTP timeouts were already too much for my weak scripting). I could put in more hours to make it work, but I don't believe I should have to do it exactly this way.
-- Problem #2 with this solution: I used crontab (or sleep loops) to poll for scripts and send the results back, and even with the shortest interval I could set, 17 minutes between checks for new executable files on the FTP server, startup is far too slow! (Our FTP server can't handle 200 computers refreshing its contents simultaneously.) I need the commands to start on all machines much closer to simultaneously.
SOLUTION #2: I combined the FTP scripts with "expect" scripting. There were three expect-based scripts: the first received the commands; the second executed the received commands; the third sent the data back over FTP.
Problem: I couldn't get the second script to execute the bash script containing "nmap -vv -sS IP/RANGE &", so the expect plan didn't work out either... Now I'm stuck.
SOLUTION #3: pccs, clusterssh, fanout & fanterm. I haven't tried these products yet, but I definitely will, except for clusterssh, because I don't like the idea of using X to solve my problem (I saw screenshots with multiple windows, each connecting to a remote machine and executing commands on it).
I mention only three products under SOLUTION #3 because they are the only ones I know of right now; I have zero experience working with such tools.
Has anybody here had experience running many different commands on a lot of machines simultaneously?
Please help! I'll be glad for any suggestions!
It's a shame there are no web/SQL products out there to accomplish this task... (or maybe I missed them in my searches?) If I had unlimited resources I would build such a solution myself, driven by a web interface and an SQL backend...
Quote:
I have 200 Linux machines and I need some way to automate them. [...] Has anybody here had experience running many different commands on a lot of machines simultaneously?
I've used fanout and it works fine. I'd stay away from FTP, though, and concentrate on SSH/SCP/SFTP: you can do a one-time key exchange between one workstation and your 200 servers, and then very easily write a simple bash script to run multiple commands, copy files, etc., on all 200 servers, one at a time.
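As a rough sketch of that approach (the file name hosts.txt, the user name admin, and the example command are assumptions for illustration, not anything from this thread):

```shell
#!/bin/sh
# Sketch of the one-time key exchange plus a serial run over all hosts.
# hosts.txt (one address per line) and the "admin" user are assumptions.

push_key() {
    # Copy your public key to every host once; after this, ssh is passwordless.
    while read -r host; do
        ssh-copy-id "admin@$host"
    done < hosts.txt
}

run_everywhere() {
    # Run one command on every host, one at a time, with a header per host.
    cmd=$1
    while read -r host; do
        echo "== $host =="
        ssh "admin@$host" "$cmd"
    done < hosts.txt
}
```

After push_key has been run once, something like `run_everywhere 'uname -r'` walks the whole list with no password prompts.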
Quote:
It's a shame there are no web/SQL products out there to accomplish this task... (or maybe I missed them in my searches?) If I had unlimited resources I would build such a solution myself, driven by a web interface and an SQL backend...
What resources do you need? You've got Linux, and any programming tools/languages you need to develop this application. Go right ahead and do it. But you're overlooking one simple fact: no matter what the FRONT END is, you are going to need a method of authentication BEHIND THE SCENES, and a method of having commands executed remotely. So you have a pretty screen in a web browser... how is the program behind it going to talk to a remote server?
Quote:
So you have a pretty screen in a web browser... how is the program behind it going to talk to a remote server?
By web interface I just meant ease of use. I thought maybe there was some well-known central management system or the like...
Right now I don't have a chance to try fanout.
I just wondered how other system administrators handle their servers without ready-made products...
[spoiler]
Speaking idealistically: if I were a developer, I would build a system that avoids SSH authentication and uses some other secure sync protocol with a "push" model. The central server sends the necessary command as a secure push request, and the client machine accepts it and immediately executes it. Meanwhile, the central server gives the admin a user-friendly page that offers common tasks with a few clicks, without even typing a command. For example, to scan a range of IP addresses I would just paste my text file into an input on a web page; the web server would accept any format of IP addresses, then either let me choose which machines handle which ranges or distribute the commands automatically according to the planned network load. (After all, the iPhone is also UNIX and it syncs its contacts without accessing anyone's SSH, and Gmail and native mail apps even receive mail notifications over a push protocol.)
[/spoiler]
Nagios is worth looking at: http://support.nagios.com/
They offer support, training, and there's online docs. There's even a sort of live demo you can try with mock servers and output.
I can't give a fair assessment of the plugin system, but it looks like scripts plus a common output format that Nagios can read, so it's probably pretty simple.
Quote:
I just wondered how other system administrators handle their servers without ready-made products...
Usually by using scripts and SSH. Fanout does that, but has additional features.
Quote:
If I were a developer, I would build a system that avoids SSH authentication and uses some other secure sync protocol with a "push" model [...]
There are such things, but they cost a good bit of money, and usually take a dedicated administrator (or two), and are a security nightmare, at least from an auditing/administration point of view.
I would use something like the already-mentioned Puppet, and clusterssh looks good as well, but only for administrative tasks like running updates or pushing new files to, say, web servers. For testing, I would use a monitoring suite that also supports active checks. I don't know if it's possible with Nagios, but with Zabbix you can run nearly anything on a remote machine from your master. Since you also have the option to write a script and run it on configured machines, you might quite easily achieve what you describe.
I've found the name for the product I was describing at the beginning. Now I know it's officially called a "configuration management tool". Puppet seems like a pretty serious solution, but I've heard it loads the CPU heavily from time to time, and judging by the manual there's a lot to read.
Before I start reading I'll try Zabbix or Nagios, because I don't actually need to manage configurations; I need to send commands (nmap) to the machines and get the output files back!
I think I now understand what you're after. You want to run a program on a remote host and have the command's output on your local system, right?
I just did a quick test with ssh and output redirection, and it works... maybe it works for you too.
Code:
ssh name@remote-host ip addr > /tmp/here
user@host:~$ cat /tmp/here
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/void
    inet 127.0.0.1/32 scope host venet0
    inet xx.xx.xx.155/32 scope global venet0:0
    inet xx.xx.xx.111/32 scope global venet0:1
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
You would need SSH key-based login and some scripting to run this on all the computers, but it should not be too hard to set up.
Another idea would be to mount a filesystem over SSH and have the programs write their output to it.
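One hedged way to sketch the mount idea is sshfs, assuming it is installed on the workstation and that the hosts write results under a hypothetical /var/scans directory (user, host, and paths are all assumptions):

```shell
#!/bin/sh
# Sketch: mount each host's remote results directory under a local tree,
# so files the scanner writes remotely can be parsed locally.
# "admin", /var/scans, and the local path are assumptions.

mount_results() {
    host=$1
    mkdir -p "$HOME/scan-results/$host"              # local mountpoint
    sshfs "admin@$host:/var/scans" "$HOME/scan-results/$host"
}
# Later, unmount with: fusermount -u "$HOME/scan-results/HOST"
```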
Actually you're almost correct, but my program already writes its output to an *.xml file (there's usually too much data to handle on stdout). Then I use a Perl parser to analyse the open ports.
The problem is that I need to run this more than once. If I scan a big range (e.g. a /16 network) on one machine, nmap needs a lot of time (around 60 minutes) for a single scan. That's why I need to split the scans up to get results within 5-10 minutes, running them on 20 or 200 computers (200 computers can scan 200 networks simultaneously and finish very quickly). And since this will be a repetitive task, I need an easy way to modify the command lines. (Ideally, the easiest thing for me would be 200 ssh windows open simultaneously so I could quickly set the ranges on all the computers, but even that still wouldn't be the easiest way.)
Actually, I have a temporary workaround for now: installing a web interface on every computer and driving my scans with iMacros scripts. I don't really like this solution, because it's not the quickest way to get the job done.
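The splitting described above can be sketched as a dry-run generator. Assuming a hypothetical hosts.txt and an admin user, it prints one ssh/nmap command per /24 chunk of a /16, assigned round-robin across the hosts, so the list can be reviewed (or piped to sh) before anything runs; note -sS needs root on the remote end:

```shell
#!/bin/sh
# Sketch: split NET.0.0/16 into 256 /24 chunks and assign them round-robin
# to the hosts listed in hosts.txt. Prints the commands (dry run) so they
# can be inspected first. File name and user name are assumptions.

split_scan() {
    net=$1                      # e.g. "10.20" for 10.20.0.0/16
    set -- $(cat hosts.txt)     # positional params = host list
    n=$#
    i=0
    while [ "$i" -lt 256 ]; do
        idx=$(( i % n + 1 ))
        eval "host=\${$idx}"    # pick host round-robin
        echo "ssh admin@$host nmap -sS -vv -oX /tmp/scan_$net.$i.xml $net.$i.0/24"
        i=$(( i + 1 ))
    done
}
```

With 200 hosts each machine gets one or two /24s, which is what brings a /16 down from about an hour to a few minutes.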
Okay, so the best approach would be to have the command and its output on one machine. If you redirect all the output to your local machine, the I/O could become a bottleneck, but you do want to parse the output locally. So how about this:
Code:
for host in $(cat file_with_server_ips_one_per_line); do
    # maybe send ssh to the background with & at the end of the line
    ssh "$host" 'your_command -o output_file_on_host'
done
You would then just ssh-mount all of the remote hosts and parse the output on your local machine...
I've already tried using SSH to start a background process:
1) First problem: I can't start a background process; e.g. "nmap OPTIONS &" just returns a blank response over SSH while the process isn't actually running.
2) Second, independent problem: I can't send the nmap command through SSH at all, because nmap requires "sudo".
I hate initiating 200 ssh sessions. If I had 1000 computers, would I use 1000 SSH sessions to run 1000 commands too?
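Both problems above have standard workarounds, sketched here under assumptions (an admin user whose sudoers file has a NOPASSWD entry for nmap): nohup plus full redirection detaches the background process from the session so ssh can exit while the scan keeps running, and sudo -n runs non-interactively instead of hanging on a password prompt.

```shell
#!/bin/sh
# Sketch: launch a root nmap scan in the background on a remote host and
# return immediately. Assumes key-based login as "admin" and a sudoers
# line such as:  admin ALL=(root) NOPASSWD: /usr/bin/nmap

start_scan() {
    host=$1
    range=$2
    # nohup + redirecting stdout/stderr detaches nmap from the ssh session,
    # so the session can close while nmap keeps running; sudo -n fails fast
    # instead of prompting if the NOPASSWD rule is missing.
    ssh "admin@$host" "nohup sudo -n nmap -sS -vv -oX /tmp/scan.xml $range >/dev/null 2>&1 &"
}
```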