Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!
I want to group data in such a way that each generated file contains no more than 3 IDs. I have written a Python script and it works, but the file is very large, so it takes a lot of time.
I have tried writing some awk commands but cannot figure out how to put the conditions into the command. (I am trying to use the command in a loop to create multiple files.)
The Python code looks like this:
Code:
infile = open("Input.txt", "r")
outfile = open("Output1.txt", "w")
list_value = []  # Storing the values of 1st column
file = 2         # Naming the file
value = ""       # 1st column value
for line in infile:
    value = line.split()[0]
    if value in list_value:
        outfile.write(line)
    else:
        list_value.append(value)
        if len(list_value) < 4:
            outfile.write(line)
        elif len(list_value) == 4:
            outfile = open("Output" + str(file) + ".txt", "w")
            outfile.write(line)
            file = file + 1
            list_value = []
            list_value.append(value)
infile.close()
outfile.close()
I would say awk will not be faster, so it is better to try to speed up the Python.
You never close outfile, which may cause problems.
Can you tell us something about the size of this file and the running time?
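If you stay with Python, something along these lines might help; this is an untested sketch that keeps the same logic but closes each output file before opening the next one (the file names and the 3-ID limit are taken from post #1, and the set is only used because it is the idiomatic container for a membership test):
Code:
infile = open("Input.txt", "r")
outfile = open("Output1.txt", "w")
seen = set()      # IDs already written to the current output file
file_no = 2       # number used for the next output file

for line in infile:
    value = line.split()[0]
    if value in seen or len(seen) < 3:
        seen.add(value)
        outfile.write(line)
    else:
        outfile.close()   # close the finished chunk before opening the next
        outfile = open("Output" + str(file_no) + ".txt", "w")
        file_no = file_no + 1
        seen = {value}
        outfile.write(line)

infile.close()
outfile.close()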
BW-userx, you seem to have formed a pattern of posting off-topic or non-constructive comments. Please refrain from this moving forward. If you have any questions, feel free to contact a mod or myself directly.
--jeremy
The bigger a plate of food, the longer it takes to eat all of it.
That is a metaphorical explanation of why it takes longer to process bigger files than smaller ones. If you need me to PM that to you, I will.
In reference to this in post #1:
"it works, but the file is very large, so it takes a lot of time."
and this in post #2:
I would say awk will not be faster, so it is better to try to speed up the Python.
You never close outfile, which may cause problems.
Can you tell us something about the size of this file and the running time?
Yeah, closing the outfiles should be done; I missed that line and have edited my question.
so you can check how long it takes. You can also time your Python script; theoretically it should take almost the same. You can only improve your script if it is definitely slower.
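For example, to time just the splitting part, here is a minimal sketch (paste the loop from post #1 where the comment is):
Code:
import time

start = time.perf_counter()
# ... run the splitting loop from post #1 here ...
elapsed = time.perf_counter() - start
print("split took %.1f seconds" % elapsed)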
I think the best way to approach it is to split it into multiple files, into multiple outputs, then combine the multiple outputs into a manageable file and come up with the desired output.
I'm just curious: what text editor do you use to open the 600 GB file?
I want to group data in such a way that each generated file contains no more than 3 IDs.
In post #1 you mentioned that each generated file should not contain more than 3 IDs, but in your desired output there are more than 3 IDs. I assumed the first column is the ID, based on post #1.
I think the best way to approach it is to split it into multiple files, into multiple outputs, then combine the multiple outputs into a manageable file and come up with the desired output.
I'm just curious: what text editor do you use to open the 600 GB file?
This file is the result of sequence processing, and I want to use it for further processing. That's why I need to group and chunk this file into multiple files of almost 50 GB each; after that, I can combine the output.
In post #1 you mentioned that each generated file should not contain more than 3 IDs, but in your desired output there are more than 3 IDs. I assumed the first column is the ID, based on post #1.
There are only 3 IDs in the first file (1, 2 and 3), but multiple values for each ID (which should be included).
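Roughly what I have in mind for the 50 GB chunking is something like this untested sketch (the 50 GB threshold and the Chunk*.txt names are placeholders, and it assumes that lines with the same ID are adjacent in the input, so no ID gets split across two files):
Code:
CHUNK_LIMIT = 50 * 1024 ** 3   # assumed target of roughly 50 GB per chunk

infile = open("Input.txt", "r")
outfile = open("Chunk1.txt", "w")
file_no = 1
written = 0        # bytes written to the current chunk
prev_id = None

for line in infile:
    value = line.split()[0]
    # only switch files at an ID boundary, once the size limit is reached
    if written >= CHUNK_LIMIT and value != prev_id:
        outfile.close()
        file_no = file_no + 1
        outfile = open("Chunk" + str(file_no) + ".txt", "w")
        written = 0
    outfile.write(line)
    written = written + len(line)
    prev_id = value

infile.close()
outfile.close()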