And that worked, but as the number of files grew I started getting OOM errors, because that command starts all occurrences at once.
I would like to find a way to limit the number of parallel loop runs:
for example, run the loop 5 times, and when one of the occurrences finishes, add another one, and after that the next one (or two, or any number), so that 5 stay running concurrently.
Is that possible? This is the script I use to make a lot of connections to remote storage - overnight I would utilise 100% of the tunnel bandwidth, and the 'command' doesn't support multithreading.
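Simplified, the loop in the script looks roughly like this ('command' and /directory stand in for the real ones):
Code:
# unbounded fan-out: every file spawns a background process at once
for f in /directory/*; do
    command "$f" &
done
wait    # with enough files, this is where the OOM errors start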
Yeah, I saw the parallel package, but I can't change these scripts.
I don't understand. If you cannot change the script, what do you want to do?
In any case, bash is not the best tool to implement this (it is possible, obviously, but perl/python can handle thread or job pools much more easily).
You can try parallel or xargs. https://stackoverflow.com/questions/...ool-bash-shell
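For example, xargs can cap the number of concurrent processes for you (a sketch; 'command' and /directory stand in for your real ones):
Code:
# run at most 5 copies of 'command' at once, one file per invocation
printf '%s\0' /directory/* | xargs -0 -n1 -P5 command

# or the equivalent with GNU parallel
parallel -j5 command ::: /directory/*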
UPDATE FINISHED - in short - I don't know how to put together the script from SO and my problematic script.
Thank you for your answer, and sorry for the mistake.
I don't want to make many modifications to the script, but I need to find some solution, because the OOM killer stops the scripts in the middle of the night and nobody checks on them before morning.
But I am not sure how to put the loops and that script together.
I am making only one modification at the end - 100000 runs with only 2 parallel executions (for example).
Code:
function task() {
    local task_no="$1"
    # doing the actual task...
    echo "Executing Task ${task_no}"
    ## COMMAND TO REPEAT HERE
    ## but - in my case, should I put the loop here? Something like
    ## this (note: without the '&', otherwise every file is forked at
    ## once inside a single task and the pool limit is defeated):
    for f in /directory/*; do command "$f"; done
    # which takes a long time
    sleep 1
}

function execute_concurrently() {
    local tasks="$1"
    local ps_pool_size="$2"
    # create an anonymous fifo as a semaphore
    local sema_fifo
    sema_fifo="$(mktemp -u)"
    mkfifo "${sema_fifo}"
    exec 3<>"${sema_fifo}"
    rm -f "${sema_fifo}"
    # every 'x' stands for an available resource
    for i in $(seq 1 "${ps_pool_size}"); do
        echo 'x' >&3
    done
    for task_no in $(seq 1 "${tasks}"); do
        read dummy <&3 # blocks until a resource is available
        (
            trap 'echo x >&3' EXIT # returns the resource on exit
            task "${task_no}"
        ) &
    done
    wait # wait until all forked tasks have finished
}

execute_concurrently 100000 2
Thank you - that is the solution I currently use,
but it does not behave well when one command finishes after 1 sec and another after 1000 sec, for example. The runtime of the command varies a lot.
So I have thought about keeping a static number of running processes instead.
Code:
jobs | wc -l  # will tell you the number of background jobs
wait -n       # will wait for the "next" job to finish
# so
[[ $(jobs | wc -l) -lt 5 ]] || wait -n
# probably will help you
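Put into a loop, that could look something like this (just a sketch, assuming bash >= 4.3 for wait -n; 'command' and /directory are the placeholders from above):
Code:
for f in /directory/*; do
    # if 5 jobs are already running, block until one of them exits
    [[ $(jobs -r | wc -l) -lt 5 ]] || wait -n
    command "$f" &
done
wait    # let the remaining jobs drain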
Thank you, that looks great, but I am not sure how to use it.
How exactly can I work this into my loop?
Thank you in advance; I am rather a beginner.
Quote:
Originally Posted by uyjjhak
Thank you - that is the solution I currently use,
but it does not behave well when one command finishes after 1 sec and another after 1000 sec, for example. The runtime of the command varies a lot.
So I have thought about keeping a static number of running processes instead.
I've done this in the past using a named pipe. You start N (your desired maximum) background processes using a shell function that executes the command and then writes a "finished" message to the pipe. After N processes have been started by the main loop, the script reads from the pipe and, in effect, waits for one of the processes to finish. At that point you're down to N-1 parallel processes, so you launch the next one using the shell function. Repeat until the main loop runs out of things to do.
I may have an old script that I think I can pare down to the basics to show the process, if you need to see it.
Update: Ah... I see you've found something that is similar to my old script. Let me know if you want to see my implementation.
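In the meantime, here is a rough sketch of that pattern (untested; 'command' and /directory are the placeholders from earlier in the thread):
Code:
#!/bin/bash
max=5                     # N: the desired maximum number of parallel processes
fifo=$(mktemp -u)
mkfifo "$fifo"
exec 3<>"$fifo"           # keep the pipe open read/write on fd 3
rm -f "$fifo"             # the fd survives unlinking

run_one() {
    command "$1"          # the real work
    echo finished >&3     # report completion to the pipe
}

running=0
for f in /directory/*; do
    if (( running >= max )); then
        read -r _ <&3     # block until one worker reports "finished"
        (( running-- ))
    fi
    run_one "$f" &
    (( running++ ))
done
wait                      # let the last workers drain
exec 3>&-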