Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
I could not find a fix to the issue in this post: Click here to see the post.
So I would like to know what alternatives I have to schedule a job to run.
(If the previous thread has not been solved, you can continue from there.)
Well, I am not too sure about expect, but if your purpose is only to get the uptime of remote servers, you can use ssh with the uptime command as follows:
Code:
#!/bin/bash
serverlist=/tmp/serverlist.txt   # contains a list of servers, one per line
while read -r server
do
    # -n keeps ssh from reading (and eating) the rest of the server list on stdin
    ssh -n -l <username> "$server" uptime >> /path/to/output_file
done < "$serverlist"
Note: In this case, you will first need to set up passwordless (key-based) login on all remote servers from the machine where you'll run this script. Then you can schedule it in crontab.
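Setting that up might look something like the sketch below. The key file name, schedule, collector script path, and log path are all placeholders, not anything from the thread:

```shell
# One-time setup, run manually on the monitoring machine (assumes OpenSSH tools):
#   ssh-keygen -t ed25519 -N '' -f ~/.ssh/uptime_key        # passphrase-less key
#   ssh-copy-id -i ~/.ssh/uptime_key.pub <username>@server  # repeat per server
#
# Then build a crontab entry; here: run the collector every 15 minutes.
cronline='*/15 * * * * /path/to/uptime_collector.sh >> /tmp/uptime.log 2>&1'
echo "$cronline"
# Install it with: (crontab -l 2>/dev/null; echo "$cronline") | crontab -
```

The setup commands are commented out because they have side effects (key creation, remote writes); run them by hand once per server.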
I still think you should fix your script to be able to run using Cron.
In any case here is the probably dirtiest solution ever proposed in this forum: what about running something like "nohup watch -n <run_every_X_seconds> /yourdir/yourscript.sh &"?
Thank you for the suggestion, but that is what I wanted to avoid...
...without giving any compelling reason for it? Note that having root log in over any network isn't exactly a security best practice. Next to that, your method doesn't really scale well, and besides there are other ways to get uptime, ranging from "ssh unprivileged_user@host 'cat /proc/uptime'" to ones that don't even require any interactive login, like SNMP or even a simple xinetd service.
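To illustrate the /proc/uptime route: that file holds two numbers (seconds since boot, then cumulative idle seconds), so the raw value needs a little formatting. A sketch, run here against a sample line rather than a live host:

```shell
# Sample of what `ssh unprivileged_user@host 'cat /proc/uptime'` might return:
# seconds since boot, then cumulative idle seconds.
sample="93178.33 181297.63"
result=$(echo "$sample" | awk '{ s = int($1)
    printf "up %d day(s), %02d:%02d", s/86400, (s%86400)/3600, (s%3600)/60 }')
echo "$result"   # → up 1 day(s), 01:52
```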
Quote:
Originally Posted by unSpawn
Note that having root log in over any network isn't exactly a security best practice.
At least it's over an encrypted link. My main concern with security would be having the passwords saved in the shell script that was being executed.
Quote:
Next to that your method doesn't really scale well and besides there are other ways to get uptime ranging from 'ssh unprivileged_user@host 'cat /proc/uptime';' to ones that don't even require any interactive login like SNMP or even a simple Xinetd service.
There's a fair amount of work involved -- more than just knowing an account/password -- to setting up SNMP (for security reasons please don't use the default community names and be aware that the uptime you get back from the SNMP daemon is the elapsed time since the daemon was last started and not necessarily the true system uptime) or a new xinetd service.
I'm guessing this is quite likely far beyond what should be mentioned in the "Newbie" forum but if obtaining uptime is going to be an important thing to track, why not set up a Nagios monitoring system on the network? It's probably only a matter of time before someone's going to want to know the state of file system use or any number of parameters on all those systems. I recall it being a fairly simple matter to implement custom queries on the Nagios server once you had NRPE configured on the monitored systems. Caveat: it's been a few years since I did any of that but it seemed easy enough. Probably more involved than the other things you mentioned but more flexible.
Personally, I still want to know what's eating the output of the uptime commands the OP has been using in his script. It's doing that on my system as well and I'm still scratching my head over it.
Expect is going to scan the input and consume what it doesn't recognize while looking for what it "expects". It looks like it would work if you removed expect from the script. It also might work if you did a "send uptime" after sending the password instead of using the command parameter. Remember, expect is not working from a pty...
This may cause a problem with expect as the remote connection terminates as soon as the command terminates... and that may be detected/signaled (socket closed) before the data is retrieved from the buffer that expect has.
Also the grep -v command shouldn't be needed with the command parameter as there is no "root:" prompt in the return data (which also points to the data in the buffer not being processed).
Try using a send "uptime" entry instead of using the uptime on the ssh command... This also forces sshd to use a pty on the remote end.
And that in turn also brings up another possibility - ssh has a -t option to direct sshd on the remote system to use a pty even with parameter commands. This may make it work too - as it would wait for the pty to close before closing the socket.
Try a send 'spawn ssh -t ...' command to expect...
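Putting those suggestions together, an untested sketch of what such an expect script might look like (host, user, prompt pattern, and password are placeholders; note that embedding the password contradicts the security advice elsewhere in this thread):

```tcl
#!/usr/bin/expect -f
# Untested sketch: use ssh -t so sshd allocates a pty, and send "uptime"
# interactively instead of passing it as a command argument.
set timeout 20
spawn ssh -t someuser@somehost
expect "assword:"
send "PLACEHOLDER\r"
expect -re {\$ $}    ;# wait for a shell prompt; adjust to the remote prompt
send "uptime\r"
expect -re {\$ $}
send "exit\r"
expect eof
```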
...My main concern with security would be having the passwords saved in the shell script that was being executed
Saving the password in the script itself isn't a recommended option at all. And ssh will not accept it anyway if you try something like assigning the password to a variable first and then using that variable when invoking the ssh command.
As I said above, you can use key-based ssh authentication to achieve this. This is common practice and you should consider using this option.
There's a fair amount of work involved -- more than just knowing an account/password -- to setting up SNMP (..) or a new xinetd service.
Code:
[ $(id -u) -eq 0 ] || { echo "need root" >&2; exit 1; }
umask 0027
cat > /etc/xinetd.d/uptime << EOF
service uptime
{
disable = no
type = UNLISTED
protocol = tcp
port = 30000
socket_type = stream
wait = no
user = nobody
server = /usr/bin/uptime
log_on_failure += HOST
only_from = 127.0.0.1 10.1.0.0/16
}
EOF
chmod 0640 /etc/xinetd.d/uptime
selinuxenabled && chcon -u system_u -t etc_t /etc/xinetd.d/uptime
/sbin/service xinetd restart
...and that's all there is to it.
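Once xinetd has been restarted, the service can be sanity-checked from an allowed client; a small sketch (assumes some netcat binary is installed, and that 127.0.0.1:30000 matches the config above):

```shell
# Query the uptime service; prints the uptime line on success,
# or "no response" if the service isn't reachable from here.
out=$(nc -w 2 127.0.0.1 30000 2>/dev/null) || true
[ -n "$out" ] || out="no response"
echo "$out"
```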
- enabling SSH agent on the monitoring host alleviates the need for sending passwords.
- A forced "/usr/bin/uptime" command in the client's authorized_keys file alleviates the need for specifying the command (and prevents that key from running anything else).
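For reference, a forced-command entry in the client's authorized_keys file might look like this (key type, key data, and comment are placeholders; the extra options also disable pty allocation and forwarding for that key):

```
command="/usr/bin/uptime",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA...key-data... monitor@collector
```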