LinuxQuestions.org
Old 03-21-2020, 11:56 AM   #1
uyjjhak
LQ Newbie
 
Registered: Feb 2020
Posts: 11

Rep: Reputation: Disabled
How to control how many parallel runs a loop starts?


A long time ago I used this in my script:

Code:
for i in `ls /directory`
do  command &
done
That worked, but as the number of files grew I started getting OOM errors, because the loop starts every instance at once.

I would like to find a way to limit the number of parallel loop runs:

e.g. run 5 instances; when one of them finishes, start another (or two, or any number) to keep 5 running concurrently.

Is that possible? I use this script to make a lot of connections to remote storage - overnight I want to utilise 100% of the tunnel bandwidth, and 'command' doesn't support multithreading.

Yes, I have seen the parallel package, but I can't change these scripts.

regards
 
Old 03-21-2020, 12:44 PM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,849

Rep: Reputation: 7309
I don't understand. If you cannot change the script, what do you want to do?
Anyway, bash is not the best tool to implement this (it is possible, obviously, but perl/python can handle thread or job pools much more easily).
You can try parallel or xargs. https://stackoverflow.com/questions/...ool-bash-shell
 
1 member found this post helpful.
Old 03-21-2020, 01:02 PM   #3
uyjjhak
LQ Newbie
 
Registered: Feb 2020
Posts: 11

Original Poster
Rep: Reputation: Disabled
UPDATE FINISHED - in short, I don't know how to combine the script from SO with my problematic script.
Thank you for your answer, and sorry for the mistake.
I don't want to make many modifications to the script, but I need to find some solution, because the OOM killer stops the scripts in the middle of the night and no one checks on them before morning.

So I have found something that could be usable for me: https://stackoverflow.com/a/56564900

but I am not sure how to put the loops and that script together.

I am making only one modification at the end - 10000 runs with only 2 parallel executions (for example).
Code:
function task() {
    local task_no="$1"
    # doing the actual task...
    echo "Executing Task ${task_no}"
    ## COMMAND TO REPEAT HERE
    ## but - in my case, should I put the loop here?
    ## like this:

    for i in `ls /directory`; do command & done

    # which takes a long time
    sleep 1
}

function execute_concurrently() {
    local tasks="$1"
    local ps_pool_size="$2"

    # create an anonymous fifo as a semaphore
    local sema_fifo
    sema_fifo="$(mktemp -u)"
    mkfifo "${sema_fifo}"
    exec 3<>"${sema_fifo}"
    rm -f "${sema_fifo}"

    # every 'x' stands for an available resource
    for i in $(seq 1 "${ps_pool_size}"); do
        echo 'x' >&3
    done

    for task_no in $(seq 1 "${tasks}"); do
        read dummy <&3 # blocks until a resource is available
        (
            trap 'echo x >&3' EXIT # returns the resource on exit
            task "${task_no}"
        )&
    done
    wait # wait until all forked tasks have finished
}

execute_concurrently 100000 2
Saved as delayer.sh
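For reference, a sketch of how the directory loop could be wired straight into that FIFO semaphore, instead of nesting the whole loop inside task(). This is an assumption about the intended integration, not the poster's final script; process_one stands in for the real 'command', and the demo directory is a throwaway:

```shell
#!/bin/bash
# Sketch: run one pooled task per file, using the anonymous-FIFO semaphore
# from the SO answer. process_one is a placeholder for the actual command.

process_one() {                      # placeholder for the real work
    echo "processing $1"
}

run_pool() {
    local dir="$1" pool_size="$2" sema_fifo i f

    # create an anonymous fifo as a semaphore, as in the SO answer
    sema_fifo="$(mktemp -u)"
    mkfifo "${sema_fifo}"
    exec 3<>"${sema_fifo}"
    rm -f "${sema_fifo}"

    for ((i = 0; i < pool_size; i++)); do
        echo 'x' >&3                 # one token per free slot
    done

    for f in "${dir}"/*; do
        read -r -u 3 _               # blocks until a token is free
        (
            trap 'echo x >&3' EXIT   # return the token on exit
            process_one "$f"
        ) &
    done
    wait                             # wait for the remaining jobs
    exec 3>&-                        # close the semaphore
}

# demo on a throwaway directory: 5 files, 2 at a time
demo_dir="$(mktemp -d)"
touch "${demo_dir}"/file{1..5}
run_pool "${demo_dir}" 2
rm -r "${demo_dir}"
```

The glob also avoids parsing `ls` output, so filenames with spaces survive.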

Last edited by uyjjhak; 03-21-2020 at 01:17 PM.
 
Old 03-22-2020, 08:27 AM   #4
MadeInGermany
Senior Member
 
Registered: Dec 2011
Location: Simplicity
Posts: 2,793

Rep: Reputation: 1201
Here is a simple one - add a delay.
Say the average runtime is 60 seconds.
Then a delay of 10 seconds is expected to keep about 6 jobs running in parallel.
Code:
for i in `ls /directory`
do
  command &
  sleep 10
done
 
Old 03-22-2020, 09:38 AM   #5
uyjjhak
LQ Newbie
 
Registered: Feb 2020
Posts: 11

Original Poster
Rep: Reputation: Disabled
Thank you - that is the solution I currently use,
but it doesn't adapt when the command ends after 1 sec or 1000 sec, for example. The runtime of the command varies a lot.

So I have thought about keeping a static number of running processes.
 
Old 03-22-2020, 09:52 AM   #6
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,849

Rep: Reputation: 7309
You can probably do something like this:
Code:
jobs | wc -l # will tell you the number of background jobs
wait -n      # will wait for the "next" job to finish
# so
[[ $(jobs | wc -l) -lt 5 ]] || wait -n
# probably will help you
 
2 members found this post helpful.
Old 03-22-2020, 10:03 AM   #7
uyjjhak
LQ Newbie
 
Registered: Feb 2020
Posts: 11

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by pan64 View Post
you can do something similar probably:
Code:
jobs | wc -l # will tell you the number of background jobs
wait -n      # will wait for the "next" job to finish
# so
[[ $(jobs | wc -l) -lt 5 ]] || wait -n
# probably will help you
Thank you, that looks great, but I am not sure how to use it.

How can I use this in my loop?
Thank you in advance - I am rather a beginner.
 
Old 03-22-2020, 10:21 AM   #8
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,849

Rep: Reputation: 7309
I would replace that sleep command with this one:
Code:
for i in `ls /directory`
do
  command &
  [[ $(jobs | wc -l) -lt 5 ]] || wait -n
done
but you should know what you want.
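A glob-based variant of that loop avoids parsing `ls` output, so filenames with spaces survive (a sketch; do_work stands in for the real command, and the demo directory is a throwaway):

```shell
#!/bin/bash
# Same throttle as above, but iterating a glob instead of `ls` output.
# Requires bash 4.3+ for wait -n. do_work is a placeholder command.

max_jobs=5

do_work() {                      # placeholder for the actual command
    echo "done: $1"
}

demo_dir="$(mktemp -d)"
touch "${demo_dir}/file with spaces" "${demo_dir}/plain"

for f in "${demo_dir}"/*; do
    do_work "$f" &
    # once max_jobs are running, wait for one to finish before adding more
    [[ $(jobs -r | wc -l) -lt ${max_jobs} ]] || wait -n
done
wait                             # drain the remaining jobs
rm -r "${demo_dir}"
```

`jobs -r` counts only running jobs, which avoids miscounting already-finished ones that are still listed.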
 
1 member found this post helpful.
Old 03-22-2020, 10:35 AM   #9
uyjjhak
LQ Newbie
 
Registered: Feb 2020
Posts: 11

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by pan64 View Post
I would replace that sleep command with this one:
Code:
for i in `ls /directory`
do
  command &
  [[ $(jobs | wc -l) -lt 5 ]] || wait -n
done
but you should know what you want.


Wow! Thank you very much :)

It works - I am starting tests now.
regards :)
 
Old 03-22-2020, 03:16 PM   #10
BW-userx
LQ Guru
 
Registered: Sep 2013
Location: Somewhere in my head.
Distribution: Slackware (15 current), Slack15, Ubuntu Studio, MX Linux, FreeBSD 13.1, Win10
Posts: 10,342

Rep: Reputation: 2242
Something like this will get the app's process count:
Code:
$ pgrep -u $USER nm-applet | wc -l
1
 
Old 03-22-2020, 10:13 PM   #11
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,803

Rep: Reputation: 550
Quote:
Originally Posted by uyjjhak View Post
Thank you - that is the solution I currently use,
but it doesn't adapt when the command ends after 1 sec or 1000 sec, for example. The runtime of the command varies a lot.

So I have thought about keeping a static number of running processes.
I've done this in the past using a named pipe. You start N (your desired maximum) background processes using a shell function that executes the command and then writes a "finished" message to the pipe. After N processes have been started by the main loop, the script reads from the pipe and, in effect, is waiting for one of the processes to finish. At that point you're down to N-1 parallel processes so you launch the next one using the shell function. Repeat until the main loop runs out of things to do.

I may have an old script that I can pare down to the basics to show the process, if you need to see it.

Update: Ah... I see you've found something similar to my old script. Let me know if you want to see my implementation.
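A minimal sketch of the scheme described above (assumptions: do_work stands in for the real command, and the file list is a demo array): start N workers, then read one "finished" message per completion before launching the next.

```shell
#!/bin/bash
# Sketch of the named-pipe scheme: workers report completion on the pipe;
# the main loop reads one message before launching the next job.
# do_work and the demo file list are stand-ins for the real workload.

pool=3

pipe="$(mktemp -u)"
mkfifo "${pipe}"
exec 4<>"${pipe}"                    # hold the pipe open read/write on fd 4
rm -f "${pipe}"

do_work() {
    sleep 0.1                        # placeholder for the real command
    echo "finished $1" >&4           # report completion to the main loop
}

files=(one two three four five six)
running=0
for f in "${files[@]}"; do
    if (( running >= pool )); then
        read -r -u 4 _               # wait for some worker to report back
        running=$((running - 1))
    fi
    do_work "$f" &
    running=$((running + 1))
done
wait                                 # drain the last workers
```

Holding the pipe open on a dedicated fd avoids the open/close races you can get when each worker opens the named pipe separately.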

Last edited by rnturn; 03-22-2020 at 10:15 PM.
 