11-08-2009, 05:30 PM | #1
Member | Registered: Dec 2007 | Location: Tuscaloosa, AL | Posts: 126
grep a file for any integer larger than X and assign the largest value to a variable
Hi all,
I have a directory full of amino acid sequence files formatted like this:
Code:
>14424|LGIG|134818
MEIVTTDSVSIVETPTWKRPLELLNS
>14424|NVEC|199771
MADEGSGFSDCSSRQYLNPLQLFEEK
>14424|CAP|112268
MTTYKRPLELLEGDSLVKICAPMVRY
>14424|HROB|107587
MSVEENKFKNSMELFESKDVVKICAP
>14424|Euprymna_scolopes_pep.fas|ESCO_227709683_c_s
QKMEHAGASWVTVHGRTPEQRGEPVN
I want to write a script that goes through each file, counts the number of occurrences of particular strings associated with the taxon names, and, if there are more than X sequences for a particular taxon, deletes the entire file.
Here is the code I have so far, using X=1 for this example:
Code:
for FileName in *.fa
do
    # one count per taxon; the literal | must be quoted or the shell treats it as a pipe
    grep -c 'ESCO_' "$FileName" >> taxon_count.txt
    grep -c 'CAP|'  "$FileName" >> taxon_count.txt
    grep -c 'LGIG|' "$FileName" >> taxon_count.txt
    grep -c 'NVEC|' "$FileName" >> taxon_count.txt
    grep -c 'HROB|' "$FileName" >> taxon_count.txt
    #??? somehow grep for any integer larger than 1 in taxon_count.txt and assign
    #    the largest value recovered to the variable $sequences_per_taxon (sketched below)
    if [ "$sequences_per_taxon" -gt 1 ] ; then
        printf "More than one sequence per taxon in %s\n" "$FileName"
        mv "$FileName" ./rejected/
    fi
done
Can anyone help me with the missing step?
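One hedged sketch of that missing step, assuming each grep -c result lands on its own line in taxon_count.txt, is to sort the counts numerically and keep the last (largest) one:
Code:
# sketch only: grab the largest per-taxon count out of taxon_count.txt
sequences_per_taxon=$(sort -n taxon_count.txt | tail -n 1)
With that in place, the existing [ "$sequences_per_taxon" -gt 1 ] test works as written (taxon_count.txt would also need to be emptied at the start of each loop iteration).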
Thanks!
Kevin
11-08-2009, 05:41 PM | #2
Moderator | Registered: Apr 2002 | Location: earth | Distribution: slackware by choice, others too :} ... android. | Posts: 23,067
Help us :}
We're not biochemists (most people here, anyway). Which part of the strings above is your taxon? And as you seem to be slapping all matches into the same taxon_count.txt, how do you want to go about determining which files to actually delete?
Cheers,
Tink
11-08-2009, 05:56 PM | #3
Senior Member | Registered: Aug 2006 | Posts: 2,697
Use awk for parsing the files, and use arrays to store your counts. It's also not clear which field is your "taxon", so I assume it is the 2nd field on lines that start with ">". A sample:
Code:
awk -F"|" '/^>/{
taxon[$2]++
}
END{
for(o in taxon){
print o,taxon[o]
}
}
' file
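A hedged sketch of how that could plug back into the shell loop from post #1 (assuming, as above, that the taxon is the 2nd |-delimited field, and using X=1 as the example threshold): let the END block exit non-zero when any taxon count exceeds X, and move the file on that exit status.
Code:
# sketch: exit 1 if any taxon appears more than X times in this file
awk -F"|" -v X=1 '/^>/{ taxon[$2]++ }
END{ for (o in taxon) if (taxon[o] > X) exit 1 }' "$FileName" \
    || mv "$FileName" ./rejected/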
11-08-2009, 08:38 PM | #4
Member | Registered: Dec 2007 | Location: Tuscaloosa, AL | Posts: 126 | Original Poster
Thanks for the replies.
Tinkster, I forgot to include a line deleting the taxon_count.txt file after each .fa file is processed. The taxon name abbreviations for the sequences I pasted above would be LGIG, NVEC, CAP, HROB, and ESCO, respectively. The problem is that the 'header line' (the line containing the greater-than sign) varies in format from taxon to taxon, but maybe the best thing to do is to reformat them all before processing them with these scripts.
ghostdog74, thanks for your help. My brain is fried for tonight, but tomorrow I am going to try to work your awk suggestion into the script I have now. I may have more questions.
Thanks again to both of you.
Kevin
11-09-2009, 10:56 AM | #5
Member | Registered: Dec 2007 | Location: Tuscaloosa, AL | Posts: 126 | Original Poster
The problem I ran into is that all my input files are formatted differently, so instead of reformatting each one to put the taxon abbreviation in the same field (for awk), I stuck with a grep-based approach. This greps for the taxon abbreviations and gets rid of input files that have fewer than 6 different taxa. I know it's not very pretty or versatile, but it gets the job done.
Code:
for FileName in *.fa
do
    # one count per taxon, appended to a temporary per-file count list
    grep -c 'NEOM_' "$FileName" >> "$FileName.taxon_count"
    grep -c 'WARG_' "$FileName" >> "$FileName.taxon_count"
    grep -c 'CNIT_' "$FileName" >> "$FileName.taxon_count"
    grep -c 'SVEN_' "$FileName" >> "$FileName.taxon_count"
    grep -c 'CAPI_' "$FileName" >> "$FileName.taxon_count"
    #... others removed for brevity
    grep -c 'CAP|'  "$FileName" >> "$FileName.taxon_count"
    grep -c 'LGIG|' "$FileName" >> "$FileName.taxon_count"
    grep -c 'NVEC|' "$FileName" >> "$FileName.taxon_count"
    grep -c 'HROB|' "$FileName" >> "$FileName.taxon_count"
    # number of taxa with a non-zero count ('^0$' so counts like 10 or 20 are not dropped too)
    taxon_count=$(grep -vc '^0$' "$FileName.taxon_count")
    if [ "$taxon_count" -lt 6 ] ; then
        printf "Too few taxa in file %s\n" "$FileName"
        mv "$FileName" ./rejected_few_taxa/
    fi
    rm -f "$FileName.taxon_count"
done
What I am trying to do now is get rid of files that have MORE than X sequences from any one taxon. Could you guys suggest a way I could search my temporary $FileName.taxon_count file for any integer larger than X and perform an operation on that file if one is found?
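grep itself can't compare numbers, so one hedged option, echoing the sketch under post #1, is to let sort find the largest count and test it in the shell (a sketch only, with a threshold of 1 as an example; rejected_many_seqs/ is a hypothetical directory name):
Code:
# sketch: reject the file if its largest per-taxon count exceeds the threshold
largest=$(sort -n "$FileName.taxon_count" | tail -n 1)
if [ "$largest" -gt 1 ] ; then
    printf "More than 1 sequence for some taxon in %s\n" "$FileName"
    mv "$FileName" ./rejected_many_seqs/   # hypothetical directory name
fi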
Thanks again!!
Kevin
11-09-2009, 05:26 PM | #6
Senior Member | Registered: Aug 2006 | Posts: 2,697
That's slow, badly written code. There's no need to run grep so many times on each file; a single grep with alternation (e.g. grep -E "word1|word2") matches multiple patterns at once.
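A sketch of that idea against the script above (pattern list abbreviated; [|] keeps the | literal inside the extended regex): one grep pass pulls out every taxon tag, and sort | uniq -c turns that into per-taxon counts.
Code:
# one pass per file: list every taxon tag found, then count how often each occurs
grep -Eo 'NEOM_|WARG_|CNIT_|SVEN_|CAPI_|CAP[|]|LGIG[|]|NVEC[|]|HROB[|]' "$FileName" \
    | sort | uniq -c
Piping that through sort -rn | head -n 1 gives the largest single-taxon count, and counting the output lines gives the number of distinct taxa present.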