Old 07-09-2012, 11:17 AM   #1
earlfox
LQ Newbie
 
Registered: Jul 2012
Posts: 15

Rep: Reputation: Disabled
Executing multiple commands on multiple Linux machines


I have 200 Linux machines and I need some way to automate them.

The tasks I'm going to run are usually network scans: nmap and other tools (I work for a company that put all responsibility for security on me, and I need to scan around a thousand networks per day to make sure our external security is fine). So the solution needs to provide:
1) The ability to quickly edit all the commands we are going to send to our 200 servers
2) The ability to get some data back (such as tcpdump captures in files on request, archived nmap XMLs, etc.)

I have been searching for a solution to this problem for two weeks now and have come up with a few approaches that didn't work out. Here is what they were and why they failed for me:
SOLUTION #1: I wrote my own FTP bash script that looked for executable bash scripts in a folder on an FTP server on the Internet (I uploaded 200 files to the FTP server, one numbered file per machine).
-- Problem #1 with this solution: the FTP host couldn't handle the traffic, and the resulting overload broke the scripts (plus I'm not the best bash script writer, and the long FTP timeouts were already a problem for my weak scripting). I could put in more hours to get a result, but I believe I shouldn't have to do it this way.
-- Problem #2 with this solution: I used crontab or a sleep loop to run the bash scripts that fetched commands and sent results back. Even at the shortest interval I could set, 17 minutes between checks of the FTP server for new executable files (our FTP server can't handle 200 computers refreshing its contents simultaneously any more often), 17 minutes is far too long to wait for a script to start!!! I need to reduce the time it takes to launch commands on all machines at once.

SOLUTION #2: I used a combination of FTP scripts and "expect"-based bash scripting. There were three "expect"-based scripts: the first received the commands; the second executed the received commands; the third sent the data back over FTP.

Problem: I couldn't get the second script working. It was supposed to execute a bash script containing "nmap -vv -sS IP/RANGE &", so my plan with expect didn't work out... Now I'm stuck.


SOLUTION #3: pccs, clusterssh, fanout & fanterm. I haven't tried these products yet but definitely will, except for clusterssh, because I don't like the idea of using X to solve my problem (I saw screenshots where multiple windows each connect and execute commands on a remote machine).
I mention only three products under SOLUTION #3 because they're the only ones I know of right now; I have zero experience with such tools.


Has anybody here had experience with needing to run multiple different commands on a lot of machines simultaneously?

Please help! I'll be glad for any suggestions!
It's a shame there's no web/SQL product out there that helps accomplish this task... (or maybe I missed it in my searches?) If I had unlimited resources I would build such a solution myself, something I could drive through a web interface with an SQL backend...

Last edited by earlfox; 07-09-2012 at 11:39 AM.
 
Old 07-09-2012, 12:40 PM   #2
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,670

Rep: Reputation: 7970
Quote:
Originally Posted by earlfox
I have 200 Linux machines and I need some way to automate them.
The tasks I'm going to run are usually network scans: nmap and other tools (I work for a company that put all responsibility for security on me, and I need to scan around a thousand networks per day to make sure our external security is fine). So the solution needs to provide:
1) The ability to quickly edit all the commands we are going to send to our 200 servers
2) The ability to get some data back (such as tcpdump captures in files on request, archived nmap XMLs, etc.)

I have been searching for a solution to this problem for two weeks now and have come up with a few approaches that didn't work out. Here is what they were and why they failed for me:

SOLUTION #3: pccs, clusterssh, fanout & fanterm. I haven't tried these products yet but definitely will, except for clusterssh, because I don't like the idea of using X to solve my problem (I saw screenshots where multiple windows each connect and execute commands on a remote machine).

I mention only three products under SOLUTION #3 because they're the only ones I know of right now; I have zero experience with such tools. Has anybody here had experience with needing to run multiple different commands on a lot of machines simultaneously?
I've used fanout and it works fine. I'd stay away from FTP, though, and concentrate on SSH/SCP/SFTP, since you can perform a one-time key swap between one workstation and your 200 servers, and VERY easily write a simple bash script to run multiple commands, copy files, etc., on 200 servers, one at a time.
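A minimal sketch of that approach, assuming the key swap is already done (e.g. with ssh-copy-id) and a hypothetical hosts.txt listing one server per line; the user name and file paths are placeholders:
Code:
#!/bin/bash
# run a command on every server in hosts.txt and copy a result file back
mkdir -p results
while read -r host; do
    ssh -o BatchMode=yes "user@$host" 'uname -a'
    scp "user@$host:/tmp/result.xml" "results/$host.xml"
done < hosts.txt
BatchMode=yes makes ssh fail instead of prompting for a password, which is what you want when looping over 200 servers unattended.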
Quote:
It's a shame there's no web/SQL product out there that helps accomplish this task... (or maybe I missed it in my searches?) If I had unlimited resources I would build such a solution myself, something I could drive through a web interface with an SQL backend...
What resources do you need?? You've got Linux and any programming tools/languages you need to develop this application. Go right ahead and do it. But you're not thinking of one simple fact: no matter what the FRONT END is, you are going to need a method of authentication BEHIND THE SCENES, and a method of having commands executed remotely. So you have a pretty screen in a web browser...how is the program behind it going to talk to a remote server?
 
Old 07-09-2012, 05:29 PM   #3
earlfox
LQ Newbie
 
Registered: Jul 2012
Posts: 15

Original Poster
Rep: Reputation: Disabled
Quote:
So you have a pretty screen in a web browser...how is the program behind it going to talk to a remote server?
By web interface I just meant ease of use. I thought maybe there was some well-known central management system or something...
Right now I don't have a chance to try fanout.
I just wondered how other system administrators handle their servers without ready-to-use products...

[spoiler]
Talking idealistically: if I were a developer, I would build a system that avoids SSH authentication and instead uses some other secure sync protocol with a "push" model. The central server would send the necessary command as a secure push request, and the client machine would accept it and execute it immediately. On top of that, the central server would give the admin a user-friendly page for performing tasks with a few clicks, without even composing a command. For example, to scan a range of IP addresses I would just paste a text file into a special input on a web page; the server would accept my IP addresses in any format, then either let me choose which machines handle which ranges or distribute the commands automatically according to the planned network load. (After all, the iPhone is also UNIX and syncs its contacts without anyone SSHing into it, and Gmail and the native mail app even receive mail notifications over a push protocol.)
[/spoiler]

Last edited by earlfox; 07-09-2012 at 05:32 PM.
 
Old 07-09-2012, 07:28 PM   #4
2ck
Member
 
Registered: Mar 2010
Location: /home/twock
Distribution: Debian
Posts: 74
Blog Entries: 9

Rep: Reputation: 21
Nagios is worth looking at: http://support.nagios.com/
They offer support and training, and there are online docs. There's even a sort of live demo you can try with mock servers and output.
I can't give a fair assessment of the plugin system, but it looks like scripts plus a common output format that Nagios can read, so probably pretty simple.
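For a sense of how simple those plugins are, here is a rough sketch of one (the target host and threshold below are made up): a Nagios plugin is just a script that prints one status line and exits with a standard code (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).
Code:
#!/bin/bash
# hypothetical check: alert when a host exposes too many open ports
open_ports=$(nmap -p 1-1024 target.example.com | grep -c '^[0-9].*open')
if [ "$open_ports" -gt 5 ]; then
    echo "CRITICAL - $open_ports open ports found"
    exit 2
fi
echo "OK - $open_ports open ports found"
exit 0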

Good luck.
 
Old 07-09-2012, 07:33 PM   #5
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,670

Rep: Reputation: 7970
Quote:
Originally Posted by earlfox
By web interface I just meant ease of use. I thought maybe there was some well-known central management system or something... Right now I don't have a chance to try fanout. I just wondered how other system administrators handle their servers without ready-to-use products...
Usually by using scripts and SSH. Fanout does that, but has additional features.
Quote:
[spoiler]
Talking idealistically: if I were a developer, I would build a system that avoids SSH authentication and instead uses some other secure sync protocol with a "push" model. The central server would send the necessary command as a secure push request, and the client machine would accept it and execute it immediately. On top of that, the central server would give the admin a user-friendly page for performing tasks with a few clicks, without even composing a command. For example, to scan a range of IP addresses I would just paste a text file into a special input on a web page; the server would accept my IP addresses in any format, then either let me choose which machines handle which ranges or distribute the commands automatically according to the planned network load. (After all, the iPhone is also UNIX and syncs its contacts without anyone SSHing into it, and Gmail and the native mail app even receive mail notifications over a push protocol.)
[/spoiler]
There are such things, but they cost a good bit of money, usually take a dedicated administrator (or two), and are a security nightmare, at least from an auditing/administration point of view.
 
Old 07-09-2012, 09:10 PM   #6
frankbell
LQ Guru
 
Registered: Jan 2006
Location: Virginia, USA
Distribution: Slackware, Ubuntu MATE, Mageia, and whatever VMs I happen to be playing with
Posts: 19,342
Blog Entries: 28

Rep: Reputation: 6145
I've heard Cluster SSH mentioned favorably on TLLTS.
 
Old 07-10-2012, 12:43 AM   #7
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,362

Rep: Reputation: 2751
Sounds a lot like a job for Puppet: https://www.linux.com/learn/tutorial...-configuration
 
Old 07-10-2012, 01:18 AM   #8
zhjim
Senior Member
 
Registered: Oct 2004
Distribution: Debian Squeeze x86_64
Posts: 1,748
Blog Entries: 11

Rep: Reputation: 233
I would use something like the previously mentioned Puppet, and clusterssh looks good as well, but only for administrative tasks like running updates or pushing new files to, say, web servers. For testing I would use a monitoring suite that also supports active checks. I don't know if it's possible with Nagios, but with Zabbix you can run nearly anything on a remote machine from your master. Since you also have the option to write a script and run it on configured machines, you might achieve what you describe quite easily.
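For illustration, the Zabbix side of that is an agent-side UserParameter in zabbix_agentd.conf, which maps a key to an arbitrary shell command the server can then query (the key name below is made up):
Code:
# in /etc/zabbix/zabbix_agentd.conf on each monitored host;
# $1 is the first parameter passed along with the key
UserParameter=custom.portcount[*],nmap -p 1-1024 $1 | grep -c '^[0-9].*open'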
 
Old 07-11-2012, 06:02 AM   #9
earlfox
LQ Newbie
 
Registered: Jul 2012
Posts: 15

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by zhjim
I would use something like the previously mentioned Puppet, and clusterssh looks good as well, but only for administrative tasks like running updates or pushing new files to, say, web servers. For testing I would use a monitoring suite that also supports active checks. I don't know if it's possible with Nagios, but with Zabbix you can run nearly anything on a remote machine from your master. Since you also have the option to write a script and run it on configured machines, you might achieve what you describe quite easily.
I've found the product I was describing at the beginning. Now I know what it's officially called: a "configuration management tool". Puppet seems like a pretty serious solution, but I've heard that it loads the CPU heavily from time to time, and judging by the manual there's a lot to read.

Before I start reading, I'll try Zabbix or Nagios, because I don't actually need to manage configurations; I need to send commands to the machines (nmap) and get the output files back!
 
Old 07-11-2012, 07:06 AM   #10
zhjim
Senior Member
 
Registered: Oct 2004
Distribution: Debian Squeeze x86_64
Posts: 1,748
Blog Entries: 11

Rep: Reputation: 233
I think I now get what you're after. You want to run a program on a remote host and have the command's output on your local system. Right?

I just did a quick test with ssh and output redirection, and it works... maybe it works for you too:
Code:
ssh name@remote-host ip addr > /tmp/here
user@host:~$ cat /tmp/here 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/void 
    inet 127.0.0.1/32 scope host venet0
    inet xx.xx.xx.155/32 scope global venet0:0
    inet xx.xx.xx.111/32 scope global venet0:1
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
You would need ssh keyfile login and some scripting to run this on all the computers, but it should not be too hard to set up.

Another idea would be to mount a filesystem over SSH (e.g. with sshfs) and have the programs write their output to it.
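A quick sketch of that sshfs idea (the host name and paths are hypothetical): mount a remote directory locally over SSH, so the output files can be parsed as if they were local.
Code:
# mount the remote scan directory locally over SSH
mkdir -p /mnt/scans/host1
sshfs user@host1:/var/scans /mnt/scans/host1

# ...parse /mnt/scans/host1/*.xml locally...

# unmount when finished
fusermount -u /mnt/scans/host1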

Last edited by zhjim; 07-11-2012 at 07:10 AM.
 
Old 07-11-2012, 08:41 AM   #11
earlfox
LQ Newbie
 
Registered: Jul 2012
Posts: 15

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by zhjim
I think I now get what you're after. You want to run a program on a remote host and have the command's output on your local system. Right?

I just did a quick test with ssh and output redirection, and it works... maybe it works for you too:
Code:
ssh name@remote-host ip addr > /tmp/here
user@host:~$ cat /tmp/here 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/void 
    inet 127.0.0.1/32 scope host venet0
    inet xx.xx.xx.155/32 scope global venet0:0
    inet xx.xx.xx.111/32 scope global venet0:1
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
You would need ssh keyfile login and some scripting to run this on all the computers, but it should not be too hard to set up.

Another idea would be to mount a filesystem over SSH (e.g. with sshfs) and have the programs write their output to it.
Actually you're almost right. But my program already writes its output to an *.xml file (there's usually too much data to handle on plain stdout). Then I use a Perl parser to analyze the open ports.

But the problem is that I don't need to run this just once. If I scan a big range from one machine (e.g. a /16 network), one nmap scan needs a lot of time (around 60 minutes). That's why I need to be able to split the scans up to get quick results within 5-10 minutes, running them on 20 or 200 computers (200 computers can scan 200 networks simultaneously and finish very quickly). And since this is going to be a repetitive task, I'll need an easy way to modify the command lines (ideally, the easiest way for me would be to have 200 ssh windows open simultaneously so I could quickly set the ranges on all the computers, but even that wouldn't really be easy).
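One rough sketch of splitting that work (hosts.txt and the 10.0.0.0/16 range are hypothetical): carve the /16 into its 256 /24 subnets and hand them out round-robin, so each machine scans a small range in parallel and writes an XML file you can collect afterwards.
Code:
#!/bin/bash
# distribute the 256 /24 subnets of 10.0.0.0/16 across the hosts in hosts.txt
# (assumes key-based login; nmap -sS also needs root on the remote side)
mapfile -t hosts < hosts.txt
n=${#hosts[@]}
for i in $(seq 0 255); do
    host=${hosts[$((i % n))]}
    # detach the scan so ssh returns immediately instead of waiting an hour
    ssh "$host" "nohup nmap -sS -oX /tmp/scan_10.0.$i.0.xml 10.0.$i.0/24 >/dev/null 2>&1 &"
done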

Last edited by earlfox; 07-11-2012 at 08:44 AM.
 
Old 07-11-2012, 08:48 AM   #12
earlfox
LQ Newbie
 
Registered: Jul 2012
Posts: 15

Original Poster
Rep: Reputation: Disabled

Actually, I've got a temporary solution for now: installing a web interface on every computer and driving my scans with iMacros scripts. I don't really like this solution, though, because it's not exactly the quickest way to get the job done.

Last edited by earlfox; 07-11-2012 at 09:07 AM.
 
Old 07-11-2012, 12:06 PM   #13
zhjim
Senior Member
 
Registered: Oct 2004
Distribution: Debian Squeeze x86_64
Posts: 1,748
Blog Entries: 11

Rep: Reputation: 233
Okay, so the best approach would be to have the command and its output on the same machine; if you redirected the output straight to your local machine, the I/O could become a bottleneck. But you'd still like to parse the output locally. So how about this:
Code:
for host in $(cat file_with_server_ips_one_per_line); do
    # maybe send ssh to the background with & at the end of the line
    ssh "$host" 'your_command -o output_file_on_host'
done
You would then just sshfs-mount all the remote hosts and parse the output on your local machine...
 
Old 07-13-2012, 01:47 PM   #14
earlfox
LQ Newbie
 
Registered: Jul 2012
Posts: 15

Original Poster
Rep: Reputation: Disabled
Quote:
for host in $(cat file_with_server_ips_one_per_line); do
    # maybe send ssh to the background with & at the end of the line
    ssh "$host" 'your_command -o output_file_on_host'
done
I've already tried using SSH to start a background process:
1) First problem: I can't actually start a background process. E.g. "nmap OPTIONS &" just returns a blank response from SSH, and the process isn't running.
2) Second, independent problem: I can't send the nmap command through SSH at all, because nmap requires "sudo".

I hate initiating 200 ssh sessions. What if I had 1000 computers; would I use SSH sessions to run 1000 commands too?
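For what it's worth, both problems have standard workarounds: ssh waits until the remote command's stdin, stdout and stderr are all closed, so a background process has to detach from all three (nohup plus redirections); and since -sS scans need root, the remote account either has to be root or have passwordless sudo configured for nmap. A sketch, with a hypothetical host and range:
Code:
# returns immediately; assumes NOPASSWD sudo for nmap on the remote host
ssh user@remote-host \
    'sudo nohup nmap -sS -oX /tmp/scan.xml 10.0.0.0/24 >/dev/null 2>&1 </dev/null &'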
 
  

