LinuxQuestions.org


choconlangthang 06-01-2011 12:08 AM

forcing running multiple CPU on server
 
Hi,
There's a blade server that I have to ssh into every time to run my program, and the program runs on only one CPU. If I want to run the program N times in parallel, I have to open N ssh terminals, which is painful. Is there any way I can make the server run it N times in parallel from a script?

(I don't mind rewriting my program in Java if that helps. I tried multithreading, but it didn't seem to work; still only one CPU was busy.)

evo2 06-01-2011 12:37 AM

Hi,

Here is an example one-liner, assuming you want to run 16 jobs and are using bash:

Code:

ssh somemachine 'for i in {1..16} ; do someprogram >& out$i.log & done'
Evo2.

Tinkster 06-01-2011 12:38 AM

The question is somewhat vague ... a simple way would be to background the
process (that is, if it doesn't require user input while it runs).

Code:

#!/bin/sh
prog &
prog &
prog &
prog &
...


Cheers,
Tink

choconlangthang 06-01-2011 12:59 AM

Thanks for the replies, evo2 and Tinkster.
To be specific, the way I normally run my program, say, two times with two input sets, is:

SSH to the server, then
Code:

java myPro input1.xml

then SSH to the server again and
Code:

java myPro input2.xml

So how do I achieve that? Thanks.

evo2 06-01-2011 01:06 AM

Hi,

Code:

ssh somemachine 'for i in {1..2} ; do java myPro input$i.xml & done'
If your program produces output on stdout, it would be best to redirect it to a log file.

Eg.
Code:

ssh somemachine 'for i in {1..2} ; do java myPro input$i.xml >& output$i.log & done'
Evo2.

choconlangthang 06-01-2011 01:23 AM

Thanks evo2, works like a champ. However, I'm still curious: the programs run, but my prompt is still at my machine (me@mymachine rather than me@server)?

Nylex 06-01-2011 01:26 AM

Presumably that's because ssh runs the command (essentially everything between the quotes) on the remote machine and then quits. You could of course put those commands in a script, log in, and run that script.
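
For example, a minimal sketch (the script name is made up): save something like this on the server, then ssh in and run it.

Code:

#!/bin/bash
# run_jobs.sh -- launch both jobs in the background from one login
for i in 1 2; do
    java myPro input$i.xml >& output$i.log &
done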

sundialsvcs 06-01-2011 10:33 AM

You can control how many instances of a process are running, but you can't control which CPU(s) it may run on at any particular time.

(I'm assuming that you're not dabbling with "CPU affinity ..." That is Very Advanced Magick.)
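
For the curious, the usual tool for that magick is taskset(1) from util-linux. A minimal sketch, reusing the poster's program; the core numbers are arbitrary:

Code:

# run two instances, each pinned to a specific CPU core
taskset -c 0 java myPro input1.xml >& output1.log &
taskset -c 1 java myPro input2.xml >& output2.log &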

tlhonmey 07-11-2011 03:44 PM

I would highly recommend that you install the program "screen".

This is a very nifty little tool for doing exactly the kind of thing you want to do. You ssh in once, and then start as many screen sessions as you like to run your different instances of the program. Additionally, if your connection gets interrupted, screen will simply detach and keep running, instead of killing your program.

Screen's man page has the complete set of options for creating and controlling screen sessions. I recommend using the -S option to give your session a sensible name so that it's easy to keep track of which is which.


This assumes, of course, that you're permitted to install things on the server. However, it's a fairly trivial program to install and may even be installed already. If it's not, whoever's managing the server may well be interested in such a tool too, so asking for it probably isn't too much of a long shot.
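
For instance, a quick sketch (the session names are arbitrary):

Code:

# start two detached, named sessions, one per instance
screen -dmS job1 java myPro input1.xml
screen -dmS job2 java myPro input2.xml

# list the sessions, then reattach to one to check on it
screen -ls
screen -r job1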

baxzius 07-11-2011 04:39 PM

An example:

Shell script: run a batch of N commands in parallel, wait for all to finish, then run the next N.
Task: run blocks consisting of 3-5 commands (in parallel/background). Example block:
Code:

dd if=/dev/urandom of=/mnt/1/x bs=1024 count=1024000000 &
dd if=/dev/urandom of=/mnt/2/x bs=1024 count=1024000000 &
dd if=/dev/urandom of=/mnt/3/x bs=1024 count=1024000000 &

When it's done, the next block should run. I suppose this can be done via lock files:

task1.sh:
Code:

real_task1 real_param1 ; rm /var/lock/myscript/task1.lock

task2.sh:
Code:

real_task2 real_param1 ; rm /var/lock/myscript/task2.lock

taskgen.sh:
Code:

while true; do
    # wait while the lock directory isn't empty
    while [ -n "$(ls -A /var/lock/myscript)" ]; do sleep 1; done
    gen_tasks.pl  # build task files from some queue
    for i in 1 2 3; do touch /var/lock/myscript/task$i.lock ; done
    ./task1.sh &
    ./task2.sh &
    ./task3.sh &
    # if task1.sh wasn't generated, exit; otherwise loop back and
    # wait for the lock files to be deleted
    [ -e ./task1.sh ] || exit
done

A number of methods to check whether the directory is empty can be found here; I'm not sure which one to use.

chrism01 07-12-2011 02:12 AM

Try the bash 'wait' command:
Quote:

wait [n ...]
Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
http://linux.die.net/man/1/bash
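
Applied to the block-of-three example above, a minimal sketch (the dd parameters are just baxzius's illustration):

Code:

#!/bin/bash
# run each block of three in parallel; 'wait' with no arguments
# returns only after all background children have exited
for run in 1 2 3; do
    dd if=/dev/urandom of=/mnt/1/x bs=1024 count=1024000000 &
    dd if=/dev/urandom of=/mnt/2/x bs=1024 count=1024000000 &
    dd if=/dev/urandom of=/mnt/3/x bs=1024 count=1024000000 &
    wait
done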

