There's a blade server I have to ssh to every time to run my program, and the program only runs on one CPU. If I want to run the program N times in parallel, I have to open N ssh terminals, which is painful. Is there any way I can make the server run N instances in parallel from a script?
(I don't mind rewriting my program in Java if that helps. I tried multithreading, but it didn't seem to work; still only one CPU was busy.)
Presumably that's because ssh runs the command (essentially everything between the quotes) on the remote machine and then quits. You could, of course, put those commands in a script, log in, and run that script.
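To make that concrete, here's a minimal sketch of such a script: it launches N instances in the background and waits for all of them. The names are placeholders (`./myprog` stands for your program; a trivial `echo` stand-in is used so the sketch is self-contained):

```shell
#!/bin/sh
# Sketch: run N instances of a program in parallel and wait for all of
# them to finish. Copy this to the server and launch it once over ssh,
# e.g.: ssh server 'sh runN.sh'
N=4
outdir=$(mktemp -d)
for i in $(seq 1 "$N"); do
    # replace the echo stand-in with: ./myprog "$i" > "$outdir/out.$i" &
    sh -c "echo instance $i" > "$outdir/out.$i" &
done
wait                    # block until every background instance has exited
echo "all $N instances finished"
```

With the loop in one script, a single `ssh server 'sh runN.sh'` replaces the N separate terminals, and the kernel can schedule each backgrounded instance on its own CPU.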
I would highly recommend that you install the program "screen".
This is a very nifty little tool for doing exactly the kind of thing you want to do. You ssh in once, and then start as many screen sessions as you like to run your different instances of the program. Additionally, if your connection gets interrupted, screen will simply detach and keep running, instead of killing your program.
Screen's man page has the complete set of options for creating and controlling screen sessions. I recommend using the -S option to give your session a sensible name so that it's easy to keep track of which is which.
This assumes, of course, that you're permitted to install things on the server. However, it's a fairly trivial program to install and may even be installed already. If it's not, whoever's managing the server may well be interested in such a tool too, so asking for it probably isn't much of a long shot.
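For reference, a sketch of that workflow (assumes screen is installed; `sleep 30` stands in for your real program):

```shell
#!/bin/sh
# Sketch: one detached, named screen session per program instance.
command -v screen >/dev/null || { echo "screen not installed"; exit 0; }

for i in 1 2 3; do
    screen -dmS "job$i" sleep 30    # -d -m: start detached; -S: name the session
done

screen -ls || true                  # show the running sessions
# Later: screen -r job1   reattaches; Ctrl-a d detaches again.
```

Because the sessions are detached, you can log out entirely and the instances keep running.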
shell script: run a batch of N commands in parallel, wait for all to finish, run next N
Task: run blocks consisting of 3-5 commands (in parallel/background). Example block:
dd if=/dev/urandom of=/mnt/1/x bs=1024 count=1024000000 &
dd if=/dev/urandom of=/mnt/2/x bs=1024 count=1024000000 &
dd if=/dev/urandom of=/mnt/3/x bs=1024 count=1024000000 &
When it's done, the next block should run. I suppose this can be done via lock files:
gen_tasks.pl # build task files from some queue
for i in 1 2 3; do touch /var/lock/myscript/task$i.lock ; done
# each generated task script removes its own lock when done:
real_task1 real_param1 ; rm /var/lock/myscript/task1.lock
# if task1.sh doesn't exist then exit, else loop waits for the lock files to be deleted
# while the directory isn't empty - wait...
A number of methods to check whether a directory is empty can be found here; I'm not sure which to use.
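One way to put the lock-file idea together is to poll until the lock directory is empty. A minimal sketch (paths are illustrative; a temp directory stands in for /var/lock/myscript, and short sleeps stand in for the real tasks):

```shell
#!/bin/sh
# Sketch: create one lock file per task, have each task delete its own
# lock when done, and poll until the lock directory is empty.
lockdir=$(mktemp -d)          # stand-in for /var/lock/myscript

for i in 1 2 3; do
    touch "$lockdir/task$i.lock"
    # each "task" runs briefly, then removes its own lock
    ( sleep 1; rm "$lockdir/task$i.lock" ) &
done

# the block is finished once the directory is empty
while [ -n "$(ls -A "$lockdir")" ]; do
    sleep 1
done
echo "block finished"
rmdir "$lockdir"
```

This works, but note the polling loop wakes up once a second regardless of when the tasks actually finish; the shell's own `wait` builtin (below) avoids both the lock files and the polling.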
wait [n ...]
Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
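Given that behaviour of `wait`, the whole pattern can be sketched without lock files at all: start every command in a block in the background, call `wait`, and only then start the next block. The `sleep` commands here stand in for the `dd` commands in the example blocks:

```shell
#!/bin/sh
# Sketch: run each block of commands in the background, then use the
# shell builtin `wait` to pause until the whole block has finished
# before starting the next one.
run_block() {
    for cmd in "$@"; do
        sh -c "$cmd" &        # launch each command in the block
    done
    wait                      # returns once every background job exits
}

run_block "sleep 1" "sleep 1" "sleep 1"   # block 1: all three in parallel
echo "block 1 done"
run_block "sleep 1" "sleep 1"             # block 2 starts only afterwards
echo "block 2 done"
```

With no arguments, `wait` waits for all child processes of the script, so each block of three 1-second sleeps takes about one second rather than three.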