Old 04-04-2014, 07:21 AM   #1
hashbang#!
Member
 
Registered: Aug 2009
Location: soon to be independent Scotland
Distribution: Debian
Posts: 120

Rep: Reputation: 17
[bash] group duplicates and singles


I have a CSV file containing people with addresses.

From this file I need to create 2 output files:
1) people who live in a single person household (one person per address)
2) people who live with other people at the same address

(I do not want to filter out duplicates, just split people into 2 different output files.)

The address is identified by columns 11-16.

What's the best way forward?
 
Old 04-04-2014, 08:19 AM   #2
grail
LQ Guru
 
Registered: Sep 2009
Location: Perth
Distribution: Manjaro
Posts: 10,011

Rep: Reputation: 3194
What language, what have you tried, and where are you stuck?
 
Old 04-04-2014, 08:31 AM   #3
hashbang#!
Member
 
Registered: Aug 2009
Location: soon to be independent Scotland
Distribution: Debian
Posts: 120

Original Poster
Rep: Reputation: 17
bash: grep, sort, awk ...

I suspect it's a case for awk but I am not particularly good at awk scripts.
 
Old 04-04-2014, 08:52 AM   #4
schneidz
LQ Guru
 
Registered: May 2005
Location: boston, usa
Distribution: fedora-35
Posts: 5,313

Rep: Reputation: 918
Maybe you can use cut to get columns 11-16,
then in a loop grep -c each address.
If the count is equal to 1, it's a single; otherwise a multi-dweller.
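A rough sketch of that idea, assuming "columns 11-16" means character columns (cut -c11-16; if they are CSV fields it would be cut -d, -f11-16 instead) and using placeholder file names. Note it re-reads the whole file once per input line, so it is only practical for small inputs.
Code:
#!/bin/bash
# Sketch of the cut + grep -c approach; file names are placeholders.
infile=people.csv
: > singles.csv
: > cohabs.csv

while IFS= read -r line; do
    addr=${line:10:6}                                  # character columns 11-16
    count=$(cut -c11-16 "$infile" | grep -cxF -- "$addr")
    if [ "$count" -eq 1 ]; then
        printf '%s\n' "$line" >> singles.csv           # sole occupant
    else
        printf '%s\n' "$line" >> cohabs.csv            # shares the address
    fi
done < "$infile"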
 
Old 04-04-2014, 09:15 AM   #5
hashbang#!
Member
 
Registered: Aug 2009
Location: soon to be independent Scotland
Distribution: Debian
Posts: 120

Original Poster
Rep: Reputation: 17
I have already isolated the addresses with cut.

It's a shame that grep -c doesn't work in conjunction with -f, returning a count for each line in the filter file!
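For a per-address count without grep, one option is to pipe the isolated addresses through uniq -c (a sketch, assuming the address occupies character columns 11-16 of a placeholder file people.csv):
Code:
# How many people share each address (character columns 11-16)?
cut -c11-16 people.csv | sort | uniq -c | sort -rn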
 
Old 04-04-2014, 10:11 AM   #6
danielbmartin
Senior Member
 
Registered: Apr 2010
Location: Apex, NC, USA
Distribution: Mint 17.3
Posts: 1,881

Rep: Reputation: 660
Quote:
Originally Posted by hashbang#! View Post
I have a CSV file containing people with addresses.

From this file I need to create 2 output files:
1) people who live in a single person household (one person per address)
2) people who live with other people at the same address

(I do not want to filter out duplicates, just split people into 2 different output files.)

The address is identified by columns 11-16.

What's the best way forward?
Help us to help you. Provide a sample input file (10-15 lines will do). Construct a sample output file which corresponds to your sample input and post both samples here. With "InFile" and "OutFile" examples we can better understand your needs and also judge if our proposed solution fills those needs.

Your description contains ambiguities, at least to my eye. Reading it literally, both output files contain only personal names, and no addresses. Is that what you really want?

Daniel B. Martin
 
Old 04-04-2014, 10:28 AM   #7
grail
LQ Guru
 
Registered: Sep 2009
Location: Perth
Distribution: Manjaro
Posts: 10,011

Rep: Reputation: 3194
So you mention:
Quote:
bash: grep, sort, awk ...

I suspect it's a case for awk but I am not particularly good at awk scripts.
And then say:
Quote:
I have already isolated the addresses with cut.

It's a shame that grep -c doesn't work in conjunction with -f, returning a count for each line in the filter file!
Reading these, it is hard to work out whether you have made an attempt or not, or are stuck but don't want to show where.


Again, provide examples, as suggested by Daniel, and also show what code you have already written and where you are stuck.

This is not your first post, so one would think you know how to ask questions, and what the usual response from people on the forums is when you don't seem to show any attempt.
 
Old 04-04-2014, 05:14 PM   #8
danielbmartin
Senior Member
 
Registered: Apr 2010
Location: Apex, NC, USA
Distribution: Mint 17.3
Posts: 1,881

Rep: Reputation: 660
With this CSV InFile ...
Code:
Jackson Browne,13 Park Ave
Hank Williams,713 Violet Ave
Lou Reed,5 Lynn Rd
Gene Vincent,5 Lynn Rd
Ray Charles,13 Park Ave
Tom Petty,3333 Morningside Dr
Carly Rae Jepsen,9 Chestnut Ln
Willie Nelson,713 Violet Ave
Bob Dylan,713 Violet Ave
... this code ...
Code:
# a[addr] collects a comma-separated list of names; c[addr] counts occupants.
awk -F, -v W1=$Singles -v W2=$Cohabs  \
    '{a[$2]=a[$2] ", "$1; c[$2]++}
  END{for (j in a) {OutRec=j" => "substr(a[j],3);   # substr drops the leading ", "
     if (c[j]==1) print OutRec >W1
             else print OutRec >W2}}' $InFile
echo "Singles ..."; cat  $Singles; echo "End Of File"
echo "Cohabs ...";  cat  $Cohabs;  echo "End Of File"
... produced this result ...
Code:
Singles ...
9 Chestnut Ln => Carly Rae Jepsen
3333 Morningside Dr => Tom Petty
End Of File
Cohabs ...
5 Lynn Rd => Lou Reed, Gene Vincent
713 Violet Ave => Hank Williams, Willie Nelson, Bob Dylan
13 Park Ave => Jackson Browne, Ray Charles
End Of File
Daniel B. Martin
 
1 member found this post helpful.
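For anyone wanting to run Daniel's snippet verbatim: the post does not show how $InFile, $Singles and $Cohabs are set, so the assignments below are an assumed setup with placeholder file names.
Code:
InFile=people.csv      # CSV input, format: Name,Address
Singles=singles.txt    # receives one-person households
Cohabs=cohabs.txt      # receives shared addresses
# ...then run the awk command from the post above.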
Old 04-21-2014, 10:34 AM   #9
hashbang#!
Member
 
Registered: Aug 2009
Location: soon to be independent Scotland
Distribution: Debian
Posts: 120

Original Poster
Rep: Reputation: 17
Thank you, Daniel. That was what I'd been looking for.
 
  

