How can I make this single-threaded program multi-threaded?
I have this working single-threaded code which I want to convert to multi-threaded.
The script is started with the variable userThread, which accepts the number of threads as a parameter; the program should then run with the number of threads specified. The code reads an input ImageURLs file with single-column values like:

100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_11.JPG
100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_12.JPG
100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_13.JPG

and tries to upload each one to a location (for the different sizes declared in the "IMAGE_SIZES" variable). I want to know how I can make this a multi-threaded program. Code:
Regards Praveen |
Stick an "&" on the end of a command to put it in the background. You can then keep tabs on the number of background jobs with "jobs | wc -l".
Stick an if-statement right before your command to check the current number of backgrounded jobs; if it's less than your limit, run the command, otherwise wait a few seconds and check again. |
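The two building blocks suggested above can be demoed like this. This is only a sketch: the sleeps are placeholders standing in for real upload commands.

```shell
#!/bin/bash
# Demo of the two building blocks: "&" backgrounds a command, and
# "jobs | wc -l" counts the jobs currently in the background.
# The sleeps stand in for real upload commands.
sleep 2 &
sleep 2 &
jobs | wc -l    # prints 2 while both sleeps are still running
```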
Thanks, Mr. suicidaleggroll,

But I am still not clear on your answer. Where does the "&" fit at the end in my code above? And where do I apply the number-of-threads loop? Code:
# start threads
for i in $(seq 1 $userThread)
do
    <code>
done
# sleep 1 second
sleep 1 |
Hi.
xargs can run multiple processes in parallel with the -P MAX-PROCS option, something like this: Code:
cat ImageURLs.txt | xargs -P5 -n1 script-to-upload-one-image.sh

You can play with the following examples: Code:
$ printf "%s\n" $(seq 5) | xargs -P5 -n1 echo
Code:
$ printf "%s\n" $(seq 5) | xargs -P5 -n2 echo |
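Applied to the file from the first post, the idea looks like this. In this sketch, echo stands in for the real per-image upload script, and the inlined sample lines stand in for reading ImageURLs.txt:

```shell
userThread=3    # degree of parallelism, as in the original script

# -P runs up to $userThread invocations at once; -n1 gives each
# invocation one line (one image path) as its argument. Replace
# "echo uploading" with the real one-image upload command.
printf '%s\n' \
    100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_11.JPG \
    100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_12.JPG \
    100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_13.JPG \
    | xargs -P "$userThread" -n1 echo uploading
```

With the real file you would feed `xargs` from `ImageURLs.txt` instead of `printf`.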
Quote:
Sticking an "&" on the end of a command runs it in the background, which means bash doesn't wait for the command to exit before moving on to the next line in the script. You would run your s3cmd command in the background, since that's the one you want to parallelize, yes?

But if you just stuck an "&" on the end of your s3cmd command and called it good, your script would loop through VERY quickly and launch ALL of them in the background in a very short amount of time. Then the script would exit, and you'd have hundreds (or however many there are) of s3cmd processes all running, fighting each other for resources. That's a good way to waste resources and piss off server admins, so you need one additional step.

You put a while statement (I said "if" above, it should be while) right before the call to s3cmd which checks the current number of backgrounded jobs. You would use it as a "rate-limiter" of sorts, to ensure you have fewer than N jobs running simultaneously, and it would just patiently wait for the backgrounded jobs to complete before launching more. Your userThread variable would control how many jobs are allowed to be backgrounded simultaneously. Something like Code:
while [[ $(jobs | wc -l) -ge $userThread ]]; do
    sleep 1
done |
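Putting the rate-limiter into the original loop structure gives roughly the following sketch. The sizes, the upload function, and the inlined URL list are placeholders assumed for illustration, standing in for the poster's s3cmd call and ImageURLs file:

```shell
#!/bin/bash
userThread=${1:-4}              # number of parallel jobs, taken from the command line
IMAGE_SIZES="640x480 320x240"   # placeholder sizes; the real list comes from the script

upload() {                      # stand-in for the real s3cmd upload command
    echo "uploading $1 at $2"
}

while read -r url; do
    for size in $IMAGE_SIZES; do
        # rate limiter: block until fewer than $userThread jobs are running
        while [ "$(jobs -r | wc -l)" -ge "$userThread" ]; do
            sleep 1
        done
        upload "$url" "$size" &   # "&" backgrounds the upload
    done
done <<'EOF'
100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_11.JPG
100049/65994/640x480/ALTMA_EXCLUSIVE_ACERO_2013_12.JPG
EOF
# in the real script the loop would read: done < ImageURLs.txt

wait    # let the remaining background jobs finish before exiting
```

The final `wait` matters: without it the script can exit while uploads are still running in the background.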