LinuxQuestions.org


doug23 08-03-2009 06:05 PM

split very large 200mb text file by every N lines (sed/awk fails)
 
Hi All,

I have a large text file with over a million lines, and I need to split the file by every N lines.

In the end, I need to have three separate files. The first will have every 3 lines starting with the very first line (no header), the second will have every 3 lines starting with the second line, and so on for the third line.

Unfortunately, the commands that I have tried so far, including:

$ sed -n '2~3p' somefile
$ awk 'NR%3==0'
$ perl -ne 'print ((0 == $. % 3) ? $_ : "")'

all fail at some point and start shifting the line sequence after a certain number of lines (probably an integer overflow).

Are there any other commands I should try which should be able to work for the entire file?

Thanks!
Doug
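For reference, awk can also do this three-way round-robin split in a single pass by redirecting each line to an output file chosen from NR % 3. This is a sketch on a small generated input; the file names are purely illustrative:

```shell
# Build a small 9-line input, then split it round-robin in one pass:
# part1.txt gets lines 1,4,7; part2.txt gets 2,5,8; part0.txt gets 3,6,9.
seq 9 > input.txt
awk '{ print > ("part" (NR % 3) ".txt") }' input.txt
```

Only three files stay open at once here, so this works the same on very large inputs.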

hasienda 08-03-2009 08:04 PM

a simple start
 
Quote:

Originally Posted by doug23 (Post 3630167)
Hi All,

I have a large text file with over a million lines, and I need to split the file by every N lines.

In the end, I need to have three separate files. The first will have every 3 lines starting with the very first line (no header), the second will have every 3 lines starting with the second line, and so on for the third line.

Unfortunately, the commands that I have tried so far, including:

$ sed -n '2~3p' somefile
$ awk 'NR%3==0'
$ perl -ne 'print ((0 == $. % 3) ? $_ : "")'

all fail at some point and start shifting the line sequence after a certain number of lines (probably an integer overflow).

Are there any other commands I should try which should be able to work for the entire file?

Thanks!
Doug

Check out csplit ('man csplit') at the console. Something like
Code:

$> cd /home/user
$> csplit -k --prefix=smallpart ./verybigfile 3 {*}
$> for i in ./smallpart*; do head -n 1 "$i"; done > ./lines147etc.txt
$> for i in ./smallpart*; do head -n 2 "$i" | tail -n 1; done > ./lines258etc.txt
$> for i in ./smallpart*; do tail -n 1 "$i"; done > ./lines369etc.txt

might do it for you.

Caveats: As I tested this, csplit produced the first file ./smallpart00 with only _two_ lines (a line-number pattern splits _before_ the given line, so the first piece holds lines 1-2 and every later piece holds three). If there are fewer than 3 lines in the last small file, its lines will still get added to one or another of the three collected files, so you'd need to edit them accordingly. Watch out for line-order issues. I've found the limit of max 100 splits is _not_ valid any longer, at least not for the version distributed with Debian's GNU coreutils 6.10-6.
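The off-by-one caveat above can be reproduced in miniature (prefix and sizes illustrative): since the line-number pattern splits before lines 3, 6, 9, …, the first piece comes out one line short, which is exactly what shifts the head/tail collection:

```shell
# 9 numbered lines, split before lines 3, 6, 9 -> pieces of 2, 3, 3, 1 lines.
seq 9 > verybigfile
csplit -s -k --prefix=smallpart verybigfile 3 '{*}'
# Collecting the first line of each piece yields 1,3,6,9 -- not the 1,4,7 wanted.
for i in smallpart*; do head -n 1 "$i"; done
```

So the pieces would need to be realigned (or the first two lines handled separately) before the head/tail collection gives the intended interleave.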

Sergei Steshenko 08-03-2009 08:05 PM

Quote:

Originally Posted by doug23 (Post 3630167)
Hi All,

I have a large text file with over a million lines, and I need to split the file by every N lines.

In the end, I need to have three separate files. The first will have every 3 lines starting with the very first line (no header), the second will have every 3 lines starting with the second line, and so on for the third line.

Unfortunately, the commands that I have tried so far, including:

$ sed -n '2~3p' somefile
$ awk 'NR%3==0'
$ perl -ne 'print ((0 == $. % 3) ? $_ : "")'

all fail at some point and start shifting the line sequence after a certain number of lines (probably an integer overflow).

Are there any other commands I should try which should be able to work for the entire file?

Thanks!
Doug


So, why "probably"? I.e., why wouldn't you write slightly more code and establish the exact root cause? Do you want us to do the debugging?

ntubski 08-03-2009 09:01 PM

Quote:

Originally Posted by doug23 (Post 3630167)
Hi All,

I have a large text file with over a million lines, and I need to split the file by every N lines.

How many millions? I tried your sed command on a 5-million line file (generated with seq $((5 * 1000 * 1000))), and it seemed to work just fine. That is, pasting all 3 parts back together produced the same file, plus 2 extra blank lines.
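That round-trip check can be reproduced at a small scale. This sketch uses a 30-line file so the three parts come out equal in length and no blank padding appears; the file names are illustrative:

```shell
# Split 1..30 into three interleaved parts with GNU sed's first~step address.
seq 30 > big.txt
sed -n '1~3p' big.txt > a.txt
sed -n '2~3p' big.txt > b.txt
sed -n '3~3p' big.txt > c.txt
# Re-interleave the parts line by line and compare against the original.
paste -d '\n' a.txt b.txt c.txt > rebuilt.txt
cmp -s big.txt rebuilt.txt && echo "round trip OK"
```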

doug23 08-04-2009 02:58 PM

Quote:

Originally Posted by Sergei Steshenko (Post 3630248)
So, why "probably"? I.e., why wouldn't you write slightly more code and establish the exact root cause? Do you want us to do the debugging?

Because, Sergei, I do not know how to debug Linux code.

doug23 08-04-2009 03:05 PM

Quote:

Originally Posted by ntubski (Post 3630275)
How many millions? I tried your sed command on a 5-million line file (generated with seq $((5 * 1000 * 1000))), and it seemed to work just fine. That is, pasting all 3 parts back together produced the same file, plus 2 extra blank lines.

Unfortunately, I can guarantee you that sed does not work properly. The rows repeat in groups of three, following this format:

SomeNumber choice_of_three_words text --->
SomeNumber choice_of_two_words text --->
SomeNumber one_word text --->

Every time I have tried the sed command, one of the three result files ends up with a mix of words, starting about 22,000 rows down, that could never otherwise end up in that file. I have checked the original data file to ensure that the problem is not in the original file.

hasienda -- are the only lines I need to check the very last ones?

Thank you very much for your help,
Doug
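Assuming the three line formats really do differ in word count, as described above, one quick way to locate the first bad line in a result file is to print the first line whose field count deviates. The expected field count of 4 and the sample data here are purely illustrative:

```shell
# Sample stream: every line should have 4 whitespace-separated fields;
# line 3 has only 3, so it is reported and scanning stops there.
printf '1 a b text\n2 c d text\n3 e text\n' |
    awk 'NF != 4 { print "line " NR ": " $0; exit 1 }'
```

Run against each of the three result files (with that file's expected field count), this pinpoints the row where the sequence first shifts.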

jbo5112 08-04-2009 03:34 PM

bash script
 
This will append to any existing files, take input from stdin, and take file names as arguments, and it doesn't bother checking for correct usage, but it works to split the lines in a round-robin fashion.

Code:

#!/bin/bash

my_file[0]="$1"
my_file[1]="$2"
my_file[2]="$3"
fail=0

while [ "$fail" -lt 1 ]; do
    for ((x=0; x<3; ++x)); do
        if IFS= read -r my_line; then
            echo "$my_line" >> "${my_file[$x]}"
        else
            fail=1
        fi
    done
done

./my_script out1 out2 out3 < input
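One performance note on the script above: `>>` reopens the output file for every single line, which adds up over a million lines. Opening each of the three files once on a dedicated file descriptor avoids that. A sketch, with the demo input and file names purely illustrative:

```shell
#!/bin/bash
# Demo input: six lines split round-robin across out1/out2/out3.
printf '%s\n' a b c d e f > input.txt
exec 3>out1 4>out2 5>out3      # open each output file exactly once
x=0
while IFS= read -r line; do
    printf '%s\n' "$line" >&$((3 + x))   # write to fd 3, 4, or 5
    x=$(( (x + 1) % 3 ))
done < input.txt
exec 3>&- 4>&- 5>&-            # close the descriptors
```

The round-robin logic is the same as in the script above; only the file handling changes.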

Sergei Steshenko 08-04-2009 04:19 PM

Quote:

Originally Posted by doug23 (Post 3631358)
Because, Sergei, I do not know how to debug Linux code.

Nonsense. Your code has nothing to do with Linux.

For example, modify your Perl one-liner into a full-blown script and debug it.

Here is a Perl for Windows, for example:

http://strawberryperl.com/ ->
http://strawberryperl.com/releases.html ->
http://strawberryperl.com/download/s...6-portable.zip .

doug23 08-10-2009 06:08 PM

Quote:

Originally Posted by jbo5112 (Post 3631413)
This will append to any existing files, take input from stdin, and take file names as arguments, and it doesn't bother checking for correct usage, but it works to split the lines in a round-robin fashion.

Code:

#!/bin/bash

my_file[0]="$1"
my_file[1]="$2"
my_file[2]="$3"
fail=0

while [ "$fail" -lt 1 ]; do
    for ((x=0; x<3; ++x)); do
        if IFS= read -r my_line; then
            echo "$my_line" >> "${my_file[$x]}"
        else
            fail=1
        fi
    done
done

./my_script out1 out2 out3 < input


Worked GREAT! Thank you very much for your help!

Doug

