Old 01-16-2012, 05:13 AM   #1
divyashree
Senior Member
 
Registered: Apr 2007
Location: Bangalore, India
Distribution: RHEL,SuSE,CentOS,Fedora,Ubuntu
Posts: 1,386

Rep: Reputation: 135
running script for multiple projects


I have a list of project numbers.

In my script I use a for loop to read the projects one by one and apply a function to each project.

The function runs a number of scripts: they fetch the lists associated with the project from the database, parse them to get the correct names, and run some code on them.

But in my case, after entering the for loop for a project, the script skips to the next project number without finishing all the tasks mentioned in the function for that project.

The limitation of the code is that it can only process one project at a time.

So the lists for all the project numbers are created one after another, without the tasks of any one project finishing first, and then the code runs on all of them at once, which leads to a hang.

Can anyone suggest how to prevent this?


My for loop looks like this:
Code:
for project in $Project_List
do

        if [ -e "$TESS_BASE_LOGDB/db/.tess_lock_CatiaCATDrawing_${HOSTNAME}_${project}" ]
        then
                echo "INSIDE FOR IF LOOP for $project"
                echo "Error : The $0 for $project is already running!!"  > "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log"

                #rm -f  "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log"

        else
                echo "INSIDE FOR ELSE LOOP FOR $project"
                touch "$TESS_BASE_LOGDB/db/.tess_lock_CatiaCATDrawing_${HOSTNAME}_${project}"
                echo "$Today_time"  > "$TESS_BASE_LOGDB/db/.tess_lock_CatiaCATDrawing_${HOSTNAME}_${project}"
                echo "Info : Now $0 is started for $project!!"  > "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log"
                ConvertToPDF "$project" 1>> "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log" 2>&1 &

        fi
done
 
Old 01-16-2012, 08:59 AM   #2
eaberry
LQ Newbie
 
Registered: Jan 2007
Posts: 8

Rep: Reputation: 1
ConvertToPDF "$project" 1>> "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log" 2>&1 &

The ampersand at the end of this command causes it to run in the background, so the script continues without waiting for it to finish. If you remove the "&", each job will finish before the script continues with the next.
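
For example, your loop body would become something like this (an untested sketch; ConvertToPDF and the variables come from your script, and the helper variable log is only for readability):
Code:
for project in $Project_List
do
        # Hypothetical helper: shorten the long log path.
        log="$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log"
        echo "Info : Now $0 is started for $project!!" > "$log"
        # No trailing '&': the loop blocks here until ConvertToPDF returns,
        # so all tasks for one project finish before the next project starts.
        ConvertToPDF "$project" >> "$log" 2>&1
done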
 
Old 01-16-2012, 09:22 AM   #3
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 948
Quote:
Originally Posted by divyashree
Can anyone suggest how to prevent this?
Use a subshell, and background that. After the loop, use wait to make sure all subshells complete before the script exits.

Here is an example:
Code:
#!/bin/bash

for DIR in 1 2 3 4 ; do
    (
        # Abort if $DIR/ cannot be entered into.
        cd "$DIR/" || exit $?

        LOCKDIR=".lock"
        COMPLETE=".complete"

        # Create the lock directory. This is reasonably atomic.
        if ! mkdir -m 0700 "$LOCKDIR" &>/dev/null ; then
            MSG="$DIR: Already being worked on."
            echo "$MSG" >&2
            exit 0
        fi

        # We have the lock directory. Automatically remove it, when this subshell exits.
        trap "rmdir '$LOCKDIR'" EXIT
           
        # Exit if this directory has been completed already.
        if [ -e "$COMPLETE" ]; then
            MSG="$DIR: Already completed on $(< "$COMPLETE")." 2>/dev/null
            echo "$MSG" >&2
            exit 0
        fi

        # Redirect output and error to files.
        exec  > standard.out
        exec 2> standard.err

        #
        # Work starts here. To abort, use exit.
        #

        # Run "./firstscript.sh", abort if it fails
        "./firstscript.sh" || exit $?

        # Run "./secondscript.sh", abort if it fails
        "./secondscript.sh" || exit $?

        #
        # All work done.
        #
        
        # Mark the directory complete using current date.
        date '+%Y-%m-%d %T %z' > "$COMPLETE"
    ) &
done

wait
The above uses directory locks, since they're reasonably reliable on all systems. There are better alternatives, but they have stricter requirements, too; you didn't supply enough details for me to suggest anything better.

Directory locking is atomic on local filesystems, and usually on NFS. It is not atomic on NFS if there is packet loss or the server restarts during the operation. The flock command from the util-linux package uses an interface that fails for mixed local and remote lockers, and is of course Linux-specific, so I didn't want to suggest that.

Hard links (using ln and stat -c %h) work on both local and remote filesystems: create a link with a unique name composed of the hostname and PID (.lock.$(hostname -s).$$) to a permanently existing lock file, then check the link count. This requires POSIX hard-link semantics, though, so it will not work on ISO-9660 (CD-ROMs), FAT (MS-DOS, USB sticks), or NTFS (Windows) filesystems, for example.
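
A rough sketch of that idea (untested, and all names here are hypothetical):
Code:
#!/bin/bash
LOCK="tess.lock"                   # permanently existing lock file
MYLINK=".lock.$(hostname -s).$$"   # unique per host and process

# Try to take the lock by linking the lock file to our unique name.
ln "$LOCK" "$MYLINK" 2>/dev/null

# If the link count is exactly 2 (the lock file itself plus our link),
# we hold the lock; any other count means somebody raced us.
if [ "$(stat -c %h "$LOCK" 2>/dev/null)" = "2" ]; then
    echo "Lock acquired; doing the work."
    # ... critical section ...
    rm -f "$MYLINK"                # release the lock
else
    rm -f "$MYLINK"                # back off; retry later if needed
    echo "Already locked by another process." >&2
fi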

Trivial lock files, created with a test-then-create touch, are very prone to problems: another process might create the file between the check and the touch. Do not rely on those.
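
In shell, that racy pattern looks like this (shown only to illustrate the gap; do not use it):
Code:
# BROKEN: another process can create lockfile between the test and the
# touch, so two processes may both believe they hold the lock.
if [ ! -e lockfile ]; then   # step 1: check
    touch lockfile           # step 2: create (not atomic with the check)
    # ... critical section ...
    rm -f lockfile
fi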

When outputting a message to the common standard error, I like to compose the error message first, so that echo is more likely to output it in one chunk and not mix messages from different sources. It's not perfect, but it's good enough for me.

The actual tasks done in the subshell have their standard output and standard error redirected to files in the work directory. This is a good idea, because it avoids mixing messages from different sources. You could direct both to the same file (using exec > log 2>&1) if you want.

Instead of running the scripts ./firstscript.sh and ./secondscript.sh in the job directory, you could do any other tasks. Just remember to use exit to abort the task; a subsequent run will then retry it from the beginning. A completed job is marked by a file (containing a timestamp), so the loop will not try to work on those again. Note, however, that the lock is taken before the completion file is checked, to avoid race conditions. (Always lock first!)
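
Applied to your loop, a rough sketch (untested; ConvertToPDF and the path variables are taken from your script) would be:
Code:
#!/bin/bash

for project in $Project_List ; do
    (
        LOCKDIR="$TESS_BASE_LOGDB/db/.tess_lock_CatiaCATDrawing_${HOSTNAME}_${project}.d"
        LOG="$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log"

        # Take the per-project lock atomically; skip if already being worked on.
        mkdir -m 0700 "$LOCKDIR" 2>/dev/null || exit 0
        trap 'rmdir "$LOCKDIR"' EXIT

        echo "Info : Now $0 is started for $project!!" > "$LOG"
        ConvertToPDF "$project" >> "$LOG" 2>&1
    ) &
done

# Wait until every project's subshell has finished (or been killed).
wait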
 
Old 01-17-2012, 01:21 AM   #4
divyashree
Senior Member
 
Registered: Apr 2007
Location: Bangalore, India
Distribution: RHEL,SuSE,CentOS,Fedora,Ubuntu
Posts: 1,386

Original Poster
Rep: Reputation: 135
Quote:
Originally Posted by eaberry
ConvertToPDF "$project" 1>> "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log" 2>&1 &

The ampersand at the end of this command causes it to run in the background, so the script continues without waiting for it to finish. If you remove the "&", each job will finish before the script continues with the next.

Thanks for replying, but if I remove the ampersand then the job finishes for the 1st project and never goes on to the next.
 
Old 01-17-2012, 09:10 AM   #5
Reuti
Senior Member
 
Registered: Dec 2004
Location: Marburg, Germany
Distribution: openSUSE 15.2
Posts: 1,339

Rep: Reputation: 260
Going back to the beginning, I wonder whether the entries in $Project_List are unique, or whether you first have to process them to get a unique list.
 
Old 01-18-2012, 05:19 AM   #6
divyashree
Senior Member
 
Registered: Apr 2007
Location: Bangalore, India
Distribution: RHEL,SuSE,CentOS,Fedora,Ubuntu
Posts: 1,386

Original Poster
Rep: Reputation: 135
Quote:
Originally Posted by Reuti
Going back to the beginning, I wonder whether the entries in $Project_List are unique, or whether you first have to process them to get a unique list.
I exported the list like this:

Code:
export Project_List="3750 3770 3190 3790 3990 3995 1111" 
 
Old 01-18-2012, 05:29 AM   #7
Reuti
Senior Member
 
Registered: Dec 2004
Location: Marburg, Germany
Distribution: openSUSE 15.2
Posts: 1,339

Rep: Reputation: 260
Then I wonder why your lock file is already in place beforehand in some cases - did you abort a former run? A simple loop should work without any lock file at all:
Code:
for project in $Project_List
do    
        echo "Info : Now $0 is stared for $project!!"  > $TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log
        echo ConvertToPDF ${project} 1>> $TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log 2>&1       
done
Does this output the correct entries in your script?
 
Old 01-18-2012, 05:34 AM   #8
divyashree
Senior Member
 
Registered: Apr 2007
Location: Bangalore, India
Distribution: RHEL,SuSE,CentOS,Fedora,Ubuntu
Posts: 1,386

Original Poster
Rep: Reputation: 135
Quote:
Originally Posted by Reuti
Then I wonder why your lock file is already in place beforehand in some cases - did you abort a former run? A simple loop should work without any lock file at all:
Code:
for project in $Project_List
do
        echo "Info : Now $0 is started for $project!!"  > "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log"
        echo ConvertToPDF "$project" 1>> "$TESS_BASE_LOGDB/Logs/PDF_catv5_status_${HOSTNAME}_${Today_time}_${project}.log" 2>&1
done
Does this output the correct entries in your script?
No no - this script is monitored by another script, which checks each running project for errors and kills the conversion for that project, so that the run does not hang and can continue with the next project.

But when I used the & (ampersand), even after the conversion for a project was killed, the script had already picked up the next one without stopping.
 