Bash script problems: scp/ssh from a cluster node to another server
I am running a series of simulations on a cluster node. The output of the simulations is a set of pretty large data files, so I would like to transfer them to the storage server as soon as they are produced. I would like to do this in a bash script (there are 68 simulations being run, two at a time, each lasting 2-3 hours, so I do need to automate).
The problem is, I can't ssh or scp from the node to the storage server.
(Yes, I have set up an ssh identity file in my home dir on the cluster head and the authorized_keys file in my home dir on the storage server, so that I am not asked for a password when I scp from the script.)
Since the node doesn't communicate with the outside world, I am assuming I need to go back to the head, scp the file from there, then go back to the node to run the next simulation, all in a script. Can anyone help me with this?
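One thing that may be worth trying before restructuring the workflow: if the node can ssh to the head (just not to the outside world), OpenSSH's ProxyJump option lets you scp from the node directly to storage, relaying through the head. This is only a sketch under assumptions: the hostnames `head` and `storage.example.com`, the user name, and the paths are all placeholders, and it requires OpenSSH 7.3 or newer on the node.

```
# ~/.ssh/config on the node (hypothetical hostnames and user)
Host storage
    HostName storage.example.com
    User myuser
    ProxyJump head
```

With that in place, `scp bigfile.dat storage:/data/` run on the node would tunnel the connection through the head, so the script on the node never needs to hop back and forth. Whether this works depends on the cluster's network policy; if the head can't reach the storage server either, you are back to staging the files on the head.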
Any help would be appreciated.
Maybe do the master and servant thing. Separate functionality. You don't need to be on the node all of the time. On the node I would run an independent script that accepts jobs from the head and returns their status. On the head I would run the "master" script that does job management: accepting a list of jobs from you, scheduling jobs on the node, polling their status, and initiating transfers on job finish. This way you don't need to move back and forth, which is resource efficient and allows you to lose your sanity in other ways :-]
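To make the "master" side concrete, here is a minimal sketch of the piece that runs on the head: start a job on the node, wait for it to finish, then relay the output file node → head → storage. Everything here is an assumption, not the poster's actual setup: the hostnames `node01` and `storage.example.com`, the staging directory, and the job/output paths are placeholders, and `SSH`/`SCP` are variables so the commands can be stubbed out for testing.

```shell
#!/bin/bash
# Hypothetical master-side helper, run on the cluster head.
# NODE, STORAGE, STAGING and the example paths are placeholders.
NODE=${NODE:-node01}
STORAGE=${STORAGE:-storage.example.com}
STAGING=${STAGING:-/tmp/staging}
SSH=${SSH:-ssh}   # overridable, e.g. for dry runs
SCP=${SCP:-scp}

# Run one simulation on the node, then relay its output file
# from the node to the head and from the head to storage.
run_and_fetch() {
    local jobscript=$1 outfile=$2
    $SSH "$NODE" "bash '$jobscript'"                          # block until the job exits
    $SCP "$NODE:$outfile" "$STAGING/"                         # pull result to the head
    $SCP "$STAGING/$(basename "$outfile")" "$STORAGE:/data/"  # push it on to storage
}

# Example call (placeholder paths):
#   run_and_fetch /home/me/sim_01.sh /scratch/me/sim_01.dat
```

A real version would loop over the job list, keep two jobs in flight, and poll a status file instead of blocking on ssh, but the relay step itself stays this simple: pull to a staging directory on the head, then push to storage.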
Thanks. That's what I ended up doing. I did lose my sanity first, though. :cry: