Old 04-06-2009, 11:17 AM   #1
vache
LQ Newbie
 
Registered: Apr 2009
Posts: 5

Rep: Reputation: 0
Read large text files (~10GB), parse for columns, output


Hello, world.

The Goal: Read in ASCII text files, parse out specific columns, send to standard out.

I'm currently using a simple awk {print $1, ...} script to accomplish this. It's all fine and good, but the files I'm reading in are massive (10GB is not uncommon in our environment) and I speculate (hope) that a C or C++ application can parse these files faster than awk.

My C-fu is weak at best (featured below is my "101" level C code mushed together after lots of Google searches - ha) and it's actually slower than awk. If it matters, I have access to some very powerful hardware (64 bit quad core Xeon, 6GB of RAM).

What alternatives are there to fopen/etc. for reading in large files and parsing? Thanks in advance.

Code:
#include <stdio.h>
#include <string.h>

int main( int argc, char *argv[] )
{
    /* Expect exactly one argument: the file to parse */
    if ( argc != 2 )
    {
        fprintf( stderr, "usage: %s FILE\n", argv[0] );
        return 1;
    }

    /* Open the file, read-only */
    FILE *file = fopen( argv[1], "r" );

    /* If the file exists... */
    if ( file != NULL )
    {
        char line[4096];        /* 256 was risky: fgets splits longer lines, which breaks the column count */
        char del[] = " \t\n";   /* split on blanks, tabs and the trailing newline */

        /* While we can read a line from the file... */
        while ( fgets( line, sizeof line, file ) != NULL )
        {
            /* Convert each line in to tokens */
            char *result = NULL;
            result = strtok( line, del );
            int tkn = 1;

            /* "Foreach" token... */
            while( result != NULL )
            {
                /* If tkn matches our list, then print */
                /* $1, $2, $4, $6, $11, $12, $13 */
                /* (the stray tkn == 3 test matched neither the comment nor the goal, so it's gone) */
                if  (
                        tkn == 1 || tkn == 2 || tkn == 4 ||
                        tkn == 6 || tkn == 11 || tkn == 12 ||
                        tkn == 13
                    )
                {
                    printf( "%s ", result );
                }
                tkn++;
                result = strtok( NULL, del );
            }
            putchar( '\n' );    /* newline is now a delimiter, so end each output line explicitly */
        }
        fclose( file );
    } else {
        fprintf( stderr, "%s: cannot open file\n", argv[1] );
        return 1;
    }
    return 0;
}
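One thing I was planning to try next is handing the stream a much larger buffer with setvbuf before giving up on stdio entirely. Untested sketch, and the 1 MiB size is a guess rather than a tuned value:

Code:
#include <stdio.h>

int main( int argc, char *argv[] )
{
    if ( argc != 2 )
    {
        fprintf( stderr, "usage: %s FILE\n", argv[0] );
        return 1;
    }

    FILE *file = fopen( argv[1], "r" );
    if ( file == NULL )
    {
        fprintf( stderr, "%s: cannot open file\n", argv[1] );
        return 1;
    }

    /* Replace the small default stdio buffer with a 1 MiB one */
    static char iobuf[1 << 20];
    setvbuf( file, iobuf, _IOFBF, sizeof iobuf );

    char line[4096];
    while ( fgets( line, sizeof line, file ) != NULL )
    {
        /* ...same strtok parsing as above... */
        fputs( line, stdout );
    }
    fclose( file );
    return 0;
}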
 
Old 04-06-2009, 11:40 AM   #2
Telemachos
Member
 
Registered: May 2007
Distribution: Debian
Posts: 754

Rep: Reputation: 60
What in the world could you do to a text file to make it 10GB? Wow.

Maybe I'm missing something, but if the main issue is simply that you can't load the whole file into memory at once, any solution would work if it read line by line. There are good, straightforward ways to do this in many scripting languages (Perl or Python, for example), and a higher level language would allow you to leverage very powerful built-in string techniques. I guess what I'm saying is that I don't know if C would offer a significant speed increase. The main speed issue is just going through all the lines, it seems, rather than any lower level algorithm. I'll be curious to hear if others know better.
 
Old 04-06-2009, 11:48 AM   #3
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by vache
The Goal: Read in ASCII text files, parse out specific columns, send to standard out. [...] I speculate (hope) that a C or C++ application can parse these files faster than awk. [...] What alternatives are there to fopen/etc. for reading in large files and parsing?
I don't think you have grounds to believe that your "C" code will be faster than awk or Perl.
 
Old 04-06-2009, 12:50 PM   #4
vache
LQ Newbie
 
Registered: Apr 2009
Posts: 5

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Telemachos
What in the world could you do to a text file to make it 10GB? Wow.
Infrastructure hardware on a class A network
 
Old 04-06-2009, 12:51 PM   #5
vache
LQ Newbie
 
Registered: Apr 2009
Posts: 5

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Sergei Steshenko
I don't think you have grounds to believe that your "C" code will be faster than awk or Perl.
Hmm?
 
Old 04-06-2009, 12:58 PM   #6
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by vache
Hmm?
Because, for example, Perl uses a highly optimized regular-expression engine, and is in general highly optimized for text parsing.
 
Old 04-06-2009, 01:32 PM   #7
Telemachos
Member
 
Registered: May 2007
Distribution: Debian
Posts: 754

Rep: Reputation: 60
@ Vache: Think about what you're doing here:
  • Open a file.
  • Start a loop which takes one line at a time from the file, saves state and ends when you hit EOF.
  • Check each line inside the loop for a match against a number of expressions.
  • Print the line if you hit a match and move on to the next line in the file. (I assume you want to print when you hit the first match and then skip the rest of the tests. No reason to keep testing the same line after you've hit a match.)
What I think Sergei is saying, and I am certainly saying, is that you don't have any special reason to think C will be significantly faster than Perl at doing those things. In addition, if developer time matters to you, then Perl is potentially faster (to develop) since it's built to handle strings, lines and regular expressions. Edit: That said, maybe I'm missing something obvious. I do that all the damn time.
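Edit 2: For concreteness, here's a rough C sketch of the "stop testing once you're done" idea: a lookup table of the wanted columns plus an early exit after the highest one (13, going by the list in your code). Untested, just to show the shape:

Code:
#include <stdio.h>
#include <string.h>

int main( void )
{
    /* index = column number; nonzero marks $1, $2, $4, $6, $11, $12, $13 */
    static const int wanted[14] = {
        [1] = 1, [2] = 1, [4] = 1, [6] = 1,
        [11] = 1, [12] = 1, [13] = 1
    };

    char line[4096];
    while ( fgets( line, sizeof line, stdin ) != NULL )
    {
        int col = 1;
        char *tok = strtok( line, " \t\n" );
        while ( tok != NULL && col <= 13 )  /* past column 13 nothing on the line matters */
        {
            if ( wanted[col] )
                printf( "%s ", tok );
            tok = strtok( NULL, " \t\n" );
            col++;
        }
        putchar( '\n' );
    }
    return 0;
}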

Last edited by Telemachos; 04-06-2009 at 01:45 PM.
 
Old 04-06-2009, 01:56 PM   #8
jglands
LQ Newbie
 
Registered: Apr 2009
Posts: 17

Rep: Reputation: 1
Why not use VB?
 
Old 04-06-2009, 02:18 PM   #9
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by jglands
Why not use VB?
The OP mentions awk, so he's most likely on a UNIX-like system, where VB is unavailable.
 
Old 04-06-2009, 02:23 PM   #10
jglands
LQ Newbie
 
Registered: Apr 2009
Posts: 17

Rep: Reputation: 1
He should install windows then.
 
Old 04-06-2009, 05:42 PM   #11
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by jglands
He should install windows then.
What for? And why pay money for an OS which is definitely not necessary for the task?
 
Old 04-06-2009, 06:24 PM   #12
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,131

Rep: Reputation: 4121
I/O is your problem - plain and simple. No matter how fast your CPU is, they all wait (for I/O completion in this case) at the same speed.

Go parse the first Gig of the data (only) - then go do it again. See the difference; that's caching versus real I/O.
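A quick way to see it for yourself: time two passes over the same first GiB and compare. Rough sketch, untested (on older glibc you may need to link with -lrt for clock_gettime):

Code:
#include <stdio.h>
#include <time.h>

/* Read up to limit bytes of path; return elapsed wall-clock seconds */
static double pass_seconds( const char *path, size_t limit )
{
    FILE *f = fopen( path, "r" );
    if ( f == NULL )
    {
        perror( path );
        return -1.0;
    }
    char buf[1 << 16];
    size_t total = 0;
    struct timespec t0, t1;
    clock_gettime( CLOCK_MONOTONIC, &t0 );
    while ( total < limit )
    {
        size_t n = fread( buf, 1, sizeof buf, f );
        if ( n == 0 )
            break;  /* EOF or error */
        total += n;
    }
    clock_gettime( CLOCK_MONOTONIC, &t1 );
    fclose( f );
    return ( t1.tv_sec - t0.tv_sec ) + ( t1.tv_nsec - t0.tv_nsec ) / 1e9;
}

int main( int argc, char *argv[] )
{
    if ( argc != 2 )
    {
        fprintf( stderr, "usage: %s FILE\n", argv[0] );
        return 1;
    }
    size_t gib = (size_t)1 << 30;  /* first GiB only */
    printf( "first pass:  %.2f s\n", pass_seconds( argv[1], gib ) );
    printf( "second pass: %.2f s\n", pass_seconds( argv[1], gib ) );
    return 0;
}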
 
Old 04-07-2009, 03:02 AM   #13
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by syg00
I/O is your problem - plain and simple. No matter how fast your CPU is, they all wait (for I/O completion in this case) at the same speed.

Go parse the first Gig of the data (only) - then go do it again. See the difference; that's caching versus real I/O.
That's right, the OS does wonders using smart caching, but for big files one can't fool nature.
 
Old 04-07-2009, 07:37 AM   #14
jglands
LQ Newbie
 
Registered: Apr 2009
Posts: 17

Rep: Reputation: 1
He should use windows, because you get what you pay for. If it's free it must be junk.
 
Old 04-07-2009, 08:39 AM   #15
Telemachos
Member
 
Registered: May 2007
Distribution: Debian
Posts: 754

Rep: Reputation: 60
Before anyone gets all riled up: please don't feed the troll.
 
  


Tags
ascii, awk, fgets, fopen, parse


