"stictching" together files with changing linking values
"stictching" together files with changing linking values
Hi guys,
I have three data files that I need to "stitch" together with "linking values" that change between files.
Here's what I mean by "stitching" and "linking value" and maybe there's a coding term I should use instead, idk?
Beginning with fileA, I know that I want all the data in all the files that is associated with the "knownName" parameter in column 2, in this example the value is 6102. fileA also tells me that the value of 6102 is linked to "objectIdentifier_" value of 17447 in column 1.
Now in fileB column 1 the "objectIdentifier_" single value of 17447 is linked to many different "dNumCount_" values listed in the 2nd column, 2764201, 2764379, 2765099, 2818365 (in a full length file there would be hundreds of them).
Now in fileC those same "dNumCount_" values of fileB show up in column 1, but there they are called "DcrIdNumber".
What I need to do is to "stitch together" fileA->fileB->fileC-> and create an outputfile having whatever columns of data that I need to pull from fileA and fileB and fileC, filtering out the rest, probably using a print statement, idk?
so far I have a tool (courtesy of grail) that begins the stitching process but it won't go further than fileB because the tool doesn't know how to handle the multiple occurrences listed for 17447
these example files are all comma delimited, but some of the files are space delimited, and these example files have header lines, but some of the files will not
Here's what I mean by "stitching" and "linking value" and maybe there's a coding term I should use instead, idk?
What you call "stitching" is known as a "Join" in SQL, or Relational Algebra. The "linking value" would be "Join Field" or Foreign Key (though Foreign Key implies a bit more than Join Field).
There is a coreutils command called join, but it requires the data be sorted on the join field.
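A minimal sketch of what that looks like, using tiny made-up two-column files (the data here is invented for illustration, not the poster's real files):

```shell
#!/bin/sh
# join needs both inputs sorted on the join field; made-up sample data:
printf '17447,6102\n17448,7000\n' > fileA.dat
printf '17447,2764201\n17447,2764379\n17448,9999\n' > fileB.dat

# sort each file on its first field, then join on field 1 of both
sort -t, -k1,1 fileA.dat > fileA.sorted
sort -t, -k1,1 fileB.dat > fileB.sorted
join -t, -1 1 -2 1 fileA.sorted fileB.sorted
```

Note that join repeats the file1 line for every matching file2 line, so a one-to-many key like 17447 produces one output line per match.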
Quote:
What I need to do is to "stitch together" fileA->fileB->fileC-> and create a fileD having whatever columns of data that I want to pull from fileA and fileB and fileC using a print statement
so far I have a tool (courtesy of grail) that begins the stitching process but it won't go further than fileB because the tool doesn't know how to handle the multiple occurrences listed for 17447
It's a bit unclear how you want to handle multiple occurrences? Anyway, if you could post some sample data in text format, and sample output, it would be a lot easier to help.
If I was going to have to do that with flat files, I'd use Perl to map each file as a hash so that the key values are the keys to each file hash.
To be honest though, this is a classic problem to solve using a DB+SQL.
ntubski, I posted the example data set in the first post, I guess I should have done that first - whoooops.
Chris, I read about databases and SQL statements on line last night, and you are right that that would be a great way to solve the problem, too bad I don't have that option.
here's my girlish thinking of what the pseudo code would be, but there's probably a better way???
1. user inputs "knownName" value, either written in the code or typed in at the keyboard
2. read fileA once to find the "objectIdentifier_" which will be join_key(k) in this case 17447
3. read fileB looking for join_key(k) in order to find "dNumCount_" which will be join_key(j); in these example files the first occurrence is 2818365
4. read fileC looking for "DcrIdNumber" join_key(j)
5. and print to an outputfile the lines of data from fileB and fileC
6. loop through fileB and fileC for the j+1 join_key, then j+2 join_key, then j+3 join_key, etc., writing those lines of data to the outputfile until end of fileB.
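The numbered steps above can be sketched in a single awk pass over the three files. This is a hypothetical self-contained example: the filenames, tiny inline data, and column layouts are all assumed from the description, not taken from the real files:

```shell
#!/bin/sh
# Made-up sample data standing in for fileA/fileB/fileC:
printf 'objectIdentifier_,knownName\n17447,6102\n' > fileA.dat
printf 'objectIdentifier_,dNumCount_\n17447,2764201\n17447,2818365\n' > fileB.dat
printf 'DcrIdNumber,stringType\n2764201,foo\n2818365,bar\n' > fileC.dat

name=6102   # step 1: the knownName the user supplies

awk -F, -v name="$name" '
  FNR == 1 { next }                 # skip the header line of each file
  NR == FNR {                       # fileA: remember objectIdentifier_ (step 2)
      if ($2 == name) k[$1] = 1; next
  }
  FILENAME == "fileB.dat" {         # fileB: collect every dNumCount_ for 17447 (steps 3, 6)
      if ($1 in k) j[$2] = $0; next
  }
  $1 in j {                         # fileC: print the joined fileB+fileC columns (steps 4, 5)
      print j[$1] "," $2
  }
' fileA.dat fileB.dat fileC.dat > outputfile
```

Because every matching dNumCount_ from fileB is stored in the j array, the many occurrences of 17447 are handled in one pass instead of looping j+1, j+2, ... explicitly.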
ntubski, maybe fileB and fileC could be sorted on join_key(k) and then use the "join" command, but I'm not sure because they are different lengths idk?
Tabby
Last edited by atjurhs; 11-21-2012 at 09:50 AM.
Reason: added idea of sorting and join
Chris, I read about databases and SQL statements on line last night, and you are right that that would be a great way to solve the problem, too bad I don't have that option.
Actually...
Code:
#!/bin/sh
name=6102
sqlite3 <<EOF
-- The fields that will be compared need to be declared as NUMERIC, otherwise the extra whitespace gets in the way.
CREATE TABLE a(objectIdentifier_ NUMERIC, knownName NUMERIC);
CREATE TABLE b(objectIdentifier_ NUMERIC, dNumCount_ NUMERIC, completed_, confid_);
CREATE TABLE c(DcrIdNumber NUMERIC , stringType , RdB, SdB, uVal, count , use, etc);
.mode csv
.import fileA.dat a
.import fileB.dat b
.import fileC.dat c
.mode list
SELECT a.objectIdentifier_, DcrIdNumber, stringType, RdB, SdB , uVal, count , use, confid_
FROM a, b, c
WHERE (a._rowid_ == 1 AND b._rowid_ == 1 AND c._rowid_ == 1) OR -- first line contains header names
(a.knownName == $name AND
a.objectIdentifier_ == b.objectIdentifier_ AND
b.dNumCount_ == c.DcrIdNumber);
EOF
I can run grail's awk script once to find the correlation from my known 6102 to the first join key 17447, and
I kinda think that if there is a way for grail's awk script to "loop over" the many occurrences of 17447 in fileB, then it will know the new join key(s) that relate to 17447 in fileC and could join them one by one to their match in fileC, yes, idk, maybe?
which is obviously not what I was hoping for, so I looked in the bins of my system and I didn't find an sqlite3 executable.
If you don't have sqlite3 installed I would expect to see something like
Code:
./ntubski.sh: 3: sqlite3: not found
Since you got some output, it seems like you do have sqlite3 installed, but maybe the format of the .csv files is a bit inconsistent. That seems like the main barrier to handling problems like this with sqlite3: it doesn't have a good builtin csv parser.
The sort and join commands don't really handle full csv either; I had to filter out the whitespace around the commas to make things work (it's also too bad sort lacks a --header option):
Good morning ntubski, I ran your script below, well sort of:
Quote:
Originally Posted by ntubski
The sort and join commands don't really handle full csv either; I had to filter out the whitespace around the commas to make things work (it's also too bad sort lacks a --header option):
ran the modified script ./ntubski_mod.sh and it ran, well sort of; it didn't error out, but there was also no output file?
so I removed all the comma-plus-spaces " , " and replaced them with just a single space (I wrote a script that does this very quickly), so that now all the files have this format:
my machine does not recognize the "header" option of join, it came back with:
Code:
join: unrecognized option `--header'
Hmm, do you know what version of join you have? Does join --version give you anything?
Quote:
and removed the header lines from the files.
ran the modified script ./ntubski_mod.sh and it ran, well sort of; it didn't error out, but there was also no output file?
You would also have to modify the other parts of the code, because some of it is there to remove the first line so that it doesn't go through sort, but then add it back so it does go through join.
Just to clarify though, the script I wrote just puts the result to standard out, not to a file. By "no output file", do you mean no output whatsoever?
Quote:
so I removed all the commas spaces * , * and replaced them with just a single space (I wrote a script that does this very quickly), so that now all the files have this format:
That might be a better approach, but you also have to change the parts in the script that were told to use a comma as the delimiter: the -t option for join and sort, and the "-F," for awk. Things look quite a bit simpler without having to account for headers and commas:
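With single-space-delimited, header-free files, sort and join can use their default whitespace splitting, so no -t (or -F, for awk) is needed. A sketch with made-up data, joining fileB's second column against fileC's first:

```shell
#!/bin/sh
# Space-delimited, header-free sample data:
printf '17447 2764201\n17447 2818365\n' > fileB.dat
printf '2764201 foo\n2818365 bar\n' > fileC.dat

# sort each file on its join field, then join fileB field 2 with fileC field 1
sort -k2,2 fileB.dat > fileB.sorted
sort -k1,1 fileC.dat > fileC.sorted
join -1 2 -2 1 fileB.sorted fileC.sorted > outputfile
cat outputfile
```

join prints the join field first, then the remaining fields of each file, so each output line here is dNumCount_, objectIdentifier_, then the fileC columns.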
it would be really helpful if there was a switch or case statement (but I don't see that sort of command in awk) that would enable it to handle a header line if one occurred in the file
I also added a redirect to an output file that now I can do other stuff with
many thanks!
btw, the join --version gave me back GNU coreutils 5.97
I'd also add that if you will be doing a lot of data processing, install/learn some sort of SQL tool and/or learn Perl.
Data munging is one of Perl's strengths.
it would be really helpful if there was a switch or case statement (but I don't see that sort of command in awk) that would enable it to handle a header line if one occurred in the file
awk has a switch statement, but I don't think you would need it just to remove a header.
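One way to cope with files that may or may not carry a header is a single pattern that skips line 1 only when it doesn't look like data. A sketch, assuming data rows always start with a digit (which holds for the numeric IDs in these files):

```shell
#!/bin/sh
# Two sketch inputs: one with a header line, one without.
printf 'DcrIdNumber,stringType\n2764201,foo\n' > with_header.dat
printf '2764201,foo\n' > no_header.dat

# Skip the first line of each file only when it doesn't start with a digit,
# so the same script handles both headered and header-free files.
awk -F, 'FNR == 1 && $1 !~ /^[0-9]/ { next } { print $1 }' with_header.dat no_header.dat
```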
Quote:
btw, the join --version gave me back GNU coreutils 5.97
Ah, the --header option was added in version 8.5: NEWS item.