Old 11-22-2013, 02:22 PM   #1
mattca
Member
 
Registered: Jan 2009
Distribution: Slackware 14.1
Posts: 333

Rep: Reputation: 56
How to process a very large file


Hi all, I have a large dataset in a single JSON file that I need to process somehow. The file is bigger than my available RAM (although smaller than total RAM + swap space). I wrote a simple Python script to process it and tested it with smaller datasets, but, obviously, I run out of memory when I try to load the file I actually need to process.

What's the best way to proceed? I'm not limited to using python, I can use whatever tool suits the job.

If what I need to do with it matters, I basically need to turn something like this:
Code:
[
    {"fruit": "apple", "vegetable": "pea"},
    {"fruit": "banana", "vegetable": "carrot"},
    {"fruit": "grapefruit", "vegetable": "pea"},
    {"fruit": "apple", "vegetable": "celery"}
]
into something like this:
Code:
{
    "pea": ["apple", "grapefruit"],
    "carrot": ["banana"],
    "celery": ["apple"]
}
Basically, I have a list of rows, and need to create an object indexed by "vegetable", which contains a list of the "fruits" associated with that vegetable.

Does that make sense?

Anyway, what's the best way to approach this? The file is 22G, and I have 8G RAM and 16G swap.

Edit: if it helps, here's the Python script I wrote:
Code:
#!/usr/bin/python

import sys
import json

if len(sys.argv) != 3:
    print "Usage:"
    print "python process-results.py <input file> <output file>"
    sys.exit()

inputfilename = sys.argv[1]
outputfilename = sys.argv[2]

with open(inputfilename) as infile:
    # json.load() parses the entire file into memory at once -- this is
    # where the script runs out of RAM on the 22G input
    items = json.load(infile)
    filtered_items = {}

    for item in items:
        vegetable = item["vegetable"]

        if vegetable not in filtered_items:
            filtered_items[vegetable] = []

        filtered_items[vegetable].append(item["fruit"])

with open(outputfilename, "w") as outfile:
    json.dump(filtered_items, outfile, indent=4)

Last edited by mattca; 11-22-2013 at 02:27 PM.
 
Old 11-22-2013, 02:31 PM   #2
danielbmartin
Senior Member
 
Registered: Apr 2010
Location: Apex, NC, USA
Distribution: Mint 17.3
Posts: 1,881

Rep: Reputation: 660
Consider using the split command to break your huge input file into manageable subsets, and process each individually.
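For illustration, a rough Python sketch of the same divide-and-conquer idea (the chunk size, file names, and one-record-per-line layout are assumptions, not part of the suggestion): each chunk of lines is reduced to a small partial result on disk, and the partials are merged at the end.
Code:
#!/usr/bin/python
# Divide and conquer: reduce the input in fixed-size chunks of lines, dump
# one small partial result per chunk, then merge the partials.
import json

CHUNK_LINES = 1000000
partial_files = []

def reduce_lines(lines):
    result = {}
    for line in lines:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue                  # skip the array brackets
        item = json.loads(line)
        result.setdefault(item["vegetable"], []).append(item["fruit"])
    return result

def dump_partial(lines):
    name = "partial-%d.json" % len(partial_files)
    with open(name, "w") as f:
        json.dump(reduce_lines(lines), f)
    partial_files.append(name)

with open("input.json") as infile:
    chunk = []
    for line in infile:
        chunk.append(line)
        if len(chunk) == CHUNK_LINES:
            dump_partial(chunk)
            chunk = []
    if chunk:
        dump_partial(chunk)

merged = {}
for name in partial_files:
    with open(name) as f:
        for veg, fruits in json.load(f).items():
            merged.setdefault(veg, []).extend(fruits)

with open("output.json", "w") as outfile:
    json.dump(merged, outfile, indent=4)
The merge step still builds the whole result in memory, so this only helps if the combined output is much smaller than the 22G input.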

Daniel B. Martin
 
Old 11-22-2013, 02:49 PM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,128

Rep: Reputation: 4120
Use ijson instead?
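For reference, ijson parses the JSON as a stream, so the whole 22G array never has to be loaded at once. A minimal sketch of how the original script might look with it (file names are placeholders; ijson's "item" prefix selects each element of a top-level array):
Code:
#!/usr/bin/python
# Streaming variant of the original script using ijson: objects are parsed
# one at a time instead of json.load()-ing the whole array.
import json
import ijson

filtered_items = {}

with open("input.json", "rb") as infile:
    for item in ijson.items(infile, "item"):
        filtered_items.setdefault(item["vegetable"], []).append(item["fruit"])

with open("output.json", "w") as outfile:
    json.dump(filtered_items, outfile, indent=4)
Note that the result dictionary still has to fit in RAM; ijson only removes the need to hold the parsed input array in memory.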
 
Old 11-22-2013, 02:58 PM   #4
Guttorm
Senior Member
 
Registered: Dec 2003
Location: Trondheim, Norway
Distribution: Debian and Ubuntu
Posts: 1,453

Rep: Reputation: 447
The output of split would have to be merged afterwards, so that needs a bit more coding. Why read it all into an array and then iterate? If you read the file line by line as text and strip the trailing comma, each line is valid JSON on its own. Parse each line and add it to filtered_items, like the code you already have does.
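A minimal sketch of that line-by-line approach, assuming exactly one record per line as in the sample data (file names are placeholders):
Code:
#!/usr/bin/python
# Line-by-line version of the original script: each data line is parsed on
# its own, so only one record plus the result dictionary is in memory.
# Assumes one {"fruit": ..., "vegetable": ...} object per line.
import json

filtered_items = {}

with open("input.json") as infile:
    for line in infile:
        line = line.strip().rstrip(",")   # drop the trailing comma
        if line in ("[", "]", ""):        # skip the array brackets
            continue
        item = json.loads(line)
        filtered_items.setdefault(item["vegetable"], []).append(item["fruit"])

with open("output.json", "w") as outfile:
    json.dump(filtered_items, outfile, indent=4)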
 
Old 11-23-2013, 02:36 AM   #5
NevemTeve
Senior Member
 
Registered: Oct 2011
Location: Budapest
Distribution: Debian/GNU/Linux, AIX
Posts: 4,863
Blog Entries: 1

Rep: Reputation: 1869
sort(1) is your friend.

Input:
Code:
[
    {"fruit": "apple", "vegetable": "pea"},
    {"fruit": "banana", "vegetable": "carrot"},
    {"fruit": "grapefruit", "vegetable": "pea"},
    {"fruit": "apple", "vegetable": "celery"}
]
output of step #1 (Python, Perl, awk, Bash, etc.):
Code:
"pea" "apple"
"carrot" "banana"
"pea" "grapefruit"
"celery" "apple"
output of step #2 (sort):
Code:
"carrot" "banana"
"celery" "apple"
"pea" "apple"
"pea" "grapefruit"
output of step #3 (Python, Perl, awk, Bash, etc.):
Code:
"carrot": "banana"
"celery": "apple"
"pea": "apple", "grapefruit"
 
Old 11-25-2013, 11:37 AM   #6
dugan
LQ Guru
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 11,225

Rep: Reputation: 5320
Quote:
Originally Posted by mattca View Post
Anyway, what's the best way to approach this? The file is 22G, and I have 8G RAM and 16G swap.
Honestly, the best way to approach this is to find a computer with 32GB of RAM to run it on. It can be a physical machine, or you can rent time in the cloud.

Okay...

If that's not an option, then I would attempt the following: read the file line by line once to determine the keys ("carrot", "celery", "pea" in the example) and create a database with a column for each key; reread the input file line by line, updating the database for each record; then *separately* write the contents of each database column out as a key in the output file. That way, only the contents of one column/key ever need to be in memory at a time.
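One way to sketch that in Python is with the standard sqlite3 module; this uses a plain (vegetable, fruit) table and a per-key query instead of one column per key, and the file names are placeholders:
Code:
#!/usr/bin/python
# Sketch of the on-disk database approach with sqlite3: stream the records
# into a table, then write the output one vegetable at a time so only one
# key's fruit list is ever in memory.
import json
import sqlite3

db = sqlite3.connect("pairs.db")
db.execute("CREATE TABLE IF NOT EXISTS pairs (vegetable TEXT, fruit TEXT)")

with open("input.json") as infile:
    for line in infile:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        item = json.loads(line)
        db.execute("INSERT INTO pairs VALUES (?, ?)",
                   (item["vegetable"], item["fruit"]))
db.commit()

with open("output.json", "w") as outfile:
    outfile.write("{\n")
    vegetables = [row[0] for row in
                  db.execute("SELECT DISTINCT vegetable FROM pairs")]
    for i, veg in enumerate(vegetables):
        fruits = [row[0] for row in
                  db.execute("SELECT fruit FROM pairs WHERE vegetable = ?",
                             (veg,))]
        separator = "," if i < len(vegetables) - 1 else ""
        outfile.write("    %s: %s%s\n"
                      % (json.dumps(veg), json.dumps(fruits), separator))
    outfile.write("}\n")
Adding an index (CREATE INDEX idx_veg ON pairs(vegetable)) after the load would make the per-key queries much faster.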

Last edited by dugan; 11-25-2013 at 11:45 AM.
 
Old 11-29-2013, 05:25 AM   #7
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
I'd consider a tied hash in Perl; basically it means you get the ability to cross-reference keys and values as you go, line by line, but the hash is actually stored on disk, not in memory.
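A rough Python analogue of that tied-hash idea is the standard shelve module (this is a substitution, not Perl's tie + DB_File itself, and the file names are placeholders): the mapping lives in an on-disk dbm file instead of RAM.
Code:
#!/usr/bin/python
# Python stand-in for a tied hash: shelve keeps the dictionary in an
# on-disk dbm file, so it is not limited by available RAM.
import json
import shelve

filtered_items = shelve.open("filtered.db")

with open("input.json") as infile:
    for line in infile:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        item = json.loads(line)
        veg = item["vegetable"]
        # shelve only persists on re-assignment, so read-modify-write the list
        fruits = filtered_items.get(veg, [])
        fruits.append(item["fruit"])
        filtered_items[veg] = fruits

with open("output.json", "w") as outfile:
    json.dump(dict(filtered_items), outfile, indent=4)

filtered_items.close()
The final dump still pulls the whole result into memory; writing it out key by key (as in the database sketch above) would avoid even that.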
 
Old 11-29-2013, 05:48 AM   #8
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,849

Rep: Reputation: 7309
I would try to read the file line by line only, and replace the strings with numbers (or shorter strings), so 1 is apple, 2 is grapefruit, and so on.
You then only need a simple mapping table and can work with those numbers. You can use arrays to store the lists of fruit numbers, and another array, indexed by vegetable number, to hold those lists. Sorting can also help a lot, as was already mentioned. Just remember, you do not need to keep the whole file in memory.
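A minimal sketch of that string-to-number mapping in Python (the helper and data layout here are one possible implementation, not part of the suggestion):
Code:
#!/usr/bin/python
# Sketch of the "replace strings with numbers" idea: intern each distinct
# fruit and vegetable name as a small integer ID and work with the IDs.
import json

fruit_ids, fruit_names = {}, []        # name -> id, id -> name
veg_ids, veg_names = {}, []
fruits_by_veg = []                     # vegetable id -> list of fruit ids

def intern(name, ids, names):
    if name not in ids:
        ids[name] = len(names)
        names.append(name)
    return ids[name]

with open("input.json") as infile:
    for line in infile:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        item = json.loads(line)
        v = intern(item["vegetable"], veg_ids, veg_names)
        f = intern(item["fruit"], fruit_ids, fruit_names)
        if v == len(fruits_by_veg):    # first time this vegetable is seen
            fruits_by_veg.append([])
        fruits_by_veg[v].append(f)

result = dict((veg_names[v], [fruit_names[f] for f in fruits])
              for v, fruits in enumerate(fruits_by_veg))

with open("output.json", "w") as outfile:
    json.dump(result, outfile, indent=4)
This only saves memory if the same names repeat many times, which is exactly the situation in the fruit/vegetable example.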
 
  

