Hi all, I have a large dataset in a single json file that I need to process somehow. This file is bigger than my available RAM (although smaller than total RAM + swap space). I had written a simple python script to process it that I had tested with smaller datasets, but obviously I run out of memory when I try to open the file I actually need to process.
What's the best way to proceed? I'm not limited to using python, I can use whatever tool suits the job.
In case what I need to do with it matters: basically, I have a list of rows, and I need to create an object indexed by "vegetable", where each entry contains the list of "fruits" associated with that vegetable.
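As a simplified example (just a few made-up rows, but the field names match the script below), I need to turn something like this:
Code:
[
    {"vegetable": ["carrot"], "fruit": "apple"},
    {"vegetable": ["carrot"], "fruit": "grapefruit"},
    {"vegetable": ["celery"], "fruit": "apple"},
    {"vegetable": ["pea"], "fruit": "grapefruit"}
]
into something like this:
Code:
{
    "carrot": ["apple", "grapefruit"],
    "celery": ["apple"],
    "pea": ["grapefruit"]
}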
Does that make sense?
Anyway, what's the best way to approach this? The file is 22G, and I have 8G RAM and 16G swap.
Edit: if it helps, here's the python script I wrote
Code:
#!/usr/bin/python
import sys
import json

# Expect exactly two arguments: the input file and the output file.
if len(sys.argv) != 3:
    print "Usage:"
    print "python process-results.py <input file> <output file>"
    sys.exit()

inputfilename = sys.argv[1]
outputfilename = sys.argv[2]

# Load the whole JSON array into memory (this is where it runs out of RAM).
with open(inputfilename) as infile:
    items = json.load(infile)

# Group the "fruit" of each row under its first "vegetable".
filtered_items = {}
for item in items:
    vegetable = item["vegetable"][0]
    if vegetable not in filtered_items:
        filtered_items[vegetable] = []
    filtered_items[vegetable].append(item["fruit"])

with open(outputfilename, "w") as outfile:
    json.dump(filtered_items, outfile, indent=4)
Splitting the file would mean merging the partial results afterwards, so that would need a bit more coding. Why does your script read everything into one array and then iterate over it? If you read the file line by line as text and remove the trailing comma from each line, each line is valid JSON on its own. Parse each line and add it to filtered_items, with the code you already have.
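Something along those lines (an untested sketch; it assumes every {...} record sits on its own line inside the big array, so if your file is laid out differently the line handling will need adjusting):
Code:
import json
import sys

filtered_items = {}

with open(sys.argv[1]) as infile:
    for line in infile:
        line = line.strip()
        if line in ("[", "]", ""):               # skip the array brackets and blank lines
            continue
        record = json.loads(line.rstrip(","))    # drop the trailing comma, parse one record
        vegetable = record["vegetable"][0]
        filtered_items.setdefault(vegetable, []).append(record["fruit"])

with open(sys.argv[2], "w") as outfile:
    json.dump(filtered_items, outfile, indent=4)
The result dict itself still has to fit in memory, but the 22G of raw JSON never does.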
Quote:
Anyway, what's the best way to approach this? The file is 22G, and I have 8G RAM and 16G swap.
Honestly, the best way to approach this is to find a computer with 32GB RAM to run it on. It can be a physical one, or you can rent CPU time on a cloud.
Okay...
If that's not an option, then I would attempt the following: Read the file line-by-line once to determine the keys ("carrot", "celery", "pea" in the example), create a database with a column header for each key, reread the input file line by line and update the database for each line, and then *separately* write the contents of each database column as a key in the output file. That way, only the contents of one column/key will ever need to be in memory at a time.
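If you go that route, Python's built-in sqlite3 module would do as the on-disk database. An untested sketch (same one-record-per-line assumption as above; I've used one (vegetable, fruit) row per record rather than one column per key, which avoids the first key-finding pass but gives the same "only one key in memory at a time" effect when writing):
Code:
import json
import sqlite3
import sys

db = sqlite3.connect("scratch.db")    # on-disk scratch database
db.execute("CREATE TABLE IF NOT EXISTS items (vegetable TEXT, fruit TEXT)")

with open(sys.argv[1]) as infile:
    for line in infile:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        record = json.loads(line)
        db.execute("INSERT INTO items VALUES (?, ?)",
                   (record["vegetable"][0], record["fruit"]))
db.commit()

# Write the output one vegetable at a time, so only one key's worth of
# fruits is ever in memory.
with open(sys.argv[2], "w") as outfile:
    outfile.write("{\n")
    vegetables = [row[0] for row in
                  db.execute("SELECT DISTINCT vegetable FROM items")]
    for i, veg in enumerate(vegetables):
        fruits = [row[0] for row in
                  db.execute("SELECT fruit FROM items WHERE vegetable = ?", (veg,))]
        sep = "," if i < len(vegetables) - 1 else ""
        outfile.write('    %s: %s%s\n' % (json.dumps(veg), json.dumps(fruits), sep))
    outfile.write("}\n")
db.close()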
I'd consider a tied hash in Perl; basically it means you get the ability to cross-reference keys and values as you go line by line, but the hash is actually stored on disk, not in memory.
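The closest Python equivalent of a tied hash is the shelve module, a dict that lives in a dbm file on disk, so only the entry you are touching is in memory. A rough, untested sketch in the same spirit, with the same one-record-per-line assumption as the earlier posts:
Code:
import json
import shelve
import sys

filtered_items = shelve.open("scratch.shelf")    # dict-like, but stored on disk

with open(sys.argv[1]) as infile:
    for line in infile:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        record = json.loads(line)
        veg = record["vegetable"][0]
        fruits = filtered_items.get(veg, [])      # read this key's list back from disk
        fruits.append(record["fruit"])
        filtered_items[veg] = fruits              # write the updated list back to disk

# Note: this last step pulls the whole result into RAM; if that is still too
# big, write it out key by key instead.
with open(sys.argv[2], "w") as outfile:
    json.dump(dict(filtered_items), outfile, indent=4)

filtered_items.close()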
I would read the file only line by line and replace the strings with numbers (or shorter strings), so 1 is apple, 2 is grapefruit, and so on.
Then you only need a simple mapping table and can work with those numbers: one array per vegetable holding its list of fruit numbers, and another array mapping the numbers back to fruit names. Sorting can also help a lot, as was mentioned. Just remember, you don't need to keep the whole file in memory.
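An untested sketch of that idea, interning every fruit string to a small integer and only mapping back to the names when writing the output (same one-record-per-line assumption as above):
Code:
import json
import sys

fruit_ids = {}        # fruit string -> integer id
fruit_names = []      # integer id -> fruit string
by_vegetable = {}     # vegetable -> list of fruit ids

with open(sys.argv[1]) as infile:
    for line in infile:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        record = json.loads(line)
        fruit = record["fruit"]
        if fruit not in fruit_ids:               # first time we see this fruit
            fruit_ids[fruit] = len(fruit_names)
            fruit_names.append(fruit)
        by_vegetable.setdefault(record["vegetable"][0], []).append(fruit_ids[fruit])

# Map the ids back to the fruit names only while writing the output.
with open(sys.argv[2], "w") as outfile:
    json.dump({veg: [fruit_names[i] for i in ids] for veg, ids in by_vegetable.items()},
              outfile, indent=4)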