[SOLVED] Read large text files (~10GB), parse for columns, output
Hello, world.
The Goal: Read in ASCII text files, parse out specific columns, send to standard out.
I'm currently using a simple awk {print $1, ...} script to accomplish this. It's all fine and good, but the files I'm reading in are massive (10GB is not uncommon in our environment) and I speculate (hope) that a C or C++ application can parse these files faster than awk.
My C-fu is weak at best (featured below is my "101" level C code mushed together after lots of Google searches - ha) and it's actually slower than awk. If it matters, I have access to some very powerful hardware (64 bit quad core Xeon, 6GB of RAM).
What alternatives are there to fopen/etc. for reading in large files and parsing? Thanks in advance.
Code:
#include <stdio.h>
#include <string.h>

int main( int argc, char *argv[] )
{
    /* No file supplied? */
    if ( argc == 1 )
    {
        puts( "\nYou must supply a file to parse\n" );
        return 1;
    }

    /* Open the file, read-only */
    FILE *file = fopen( argv[1], "r" );

    /* If the file exists... */
    if ( file != NULL )
    {
        char line[256];
        char del[] = " ";

        /* While we can read a line from the file... */
        while ( fgets( line, sizeof line, file ) != NULL )
        {
            /* Convert each line into tokens */
            char *result = NULL;
            result = strtok( line, del );
            int tkn = 1;

            /* "Foreach" token... */
            while( result != NULL )
            {
                /* If tkn matches our list, then print */
                /* $1, $2, $4, $6, $11, $12, $13 */
                if (
                    tkn == 1 || tkn == 2 || tkn == 3 ||
                    tkn == 4 || tkn == 6 || tkn == 11 ||
                    tkn == 12 || tkn == 13
                )
                {
                    printf( "%s ", result );
                }
                tkn++;
                result = strtok( NULL, del );
            }
        }
        fclose( file );
    } else {
        printf( "%s", argv[1] );
    }
    return 0;
}
What in the world could you do to a text file to make it 10GB? Wow.
Maybe I'm missing something, but if the main issue is simply that you can't load the whole file into memory at once, any solution would work if it read line by line. There are good, straightforward ways to do this in many scripting languages (Perl or Python, for example), and a higher level language would allow you to leverage very powerful built-in string techniques. I guess what I'm saying is that I don't know if C would offer a significant speed increase. The main speed issue is just going through all the lines, it seems, rather than any lower level algorithm. I'll be curious to hear if others know better.
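Something like this rough, untested Python sketch is what I mean. The column numbers are just lifted from the comment in your C code, and it assumes whitespace-separated fields like awk's default:
Code:
#!/usr/bin/env python
import sys

# Columns to keep, 1-based like awk ($1, $2, $4, $6, $11, $12, $13)
WANTED = (1, 2, 4, 6, 11, 12, 13)

def main():
    if len(sys.argv) != 2:
        sys.exit("You must supply a file to parse")
    with open(sys.argv[1]) as f:
        for line in f:                 # one line at a time, never the whole 10GB file in memory
            fields = line.split()      # whitespace split, like awk's default field handling
            picked = [fields[i - 1] for i in WANTED if i <= len(fields)]
            print(" ".join(picked))

if __name__ == "__main__":
    main()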
I don't think you have grounds to believe that your C code will be faster than awk or Perl.
Start a loop which takes one line at a time from the file, saves state and ends when you hit EOF.
Check each line inside the loop for a match against a number of expressions.
Print the line if you hit a match and move onto the next line in the file. (I assume you want to print when you hit the first match and then skip the rest of the tests. No reason to keep testing the same line after you've hit a match.)
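In rough, untested Python the shape would be something like this; the patterns are placeholders, so substitute the expressions you actually need, and Perl would look much the same:
Code:
import re
import sys

# Placeholder patterns -- replace with the expressions you actually care about
PATTERNS = [re.compile(p) for p in (r"ERROR", r"timeout", r"user=\w+")]

with open(sys.argv[1]) as f:
    for line in f:                     # the file object keeps the read position; loop ends at EOF
        for pat in PATTERNS:
            if pat.search(line):
                sys.stdout.write(line)   # print the line on the first match...
                break                    # ...and skip the remaining tests for it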
What I think Sergei is saying, and I am certainly saying, is that you don't have any special reason to think C will be significantly faster than Perl at doing those things. In addition, if developer time matters to you, then Perl is potentially faster (to develop) since it's built to handle strings, lines and regular expressions. Edit: That said, maybe I'm missing something obvious. I do that all the damn time.