06-21-2017, 08:38 AM | #1
Asoo, Member (Registered: Apr 2017; Posts: 33)
How to group data in Linux
Greetings!
My file looks like this:
Code:
ID Num1 Num2
1 1 2
1 2 4
1 3 6
2 4 8
2 5 10
3 6 12
3 7 14
3 8 16
3 9 18
4 10 20
5 11 22
5 12 24
I want to group the data so that each generated file contains no more than 3 distinct IDs. I wrote a Python script and it works, but the file is huge, so it takes a long time.
I have tried writing some awk commands, but I cannot figure out how to put the conditions into the command (I am trying to use the command in a loop to create multiple files); see the awk sketch after the Python code below.
My Python code looks like this:
Code:
infile = open("Input.txt", "r")
outfile = open("Output1.txt", "w")
list_value = []   # distinct 1st-column values seen in the current output file
file = 2          # number used to name the next output file
value = ""        # current 1st-column value

for line in infile:
    value = line.split()[0]
    if value in list_value:
        outfile.write(line)
    else:
        list_value.append(value)
        if len(list_value) < 4:
            outfile.write(line)
        elif len(list_value) == 4:
            outfile.close()   # close the finished chunk before opening the next
            outfile = open("Output" + str(file) + ".txt", "w")
            outfile.write(line)
            file = file + 1
            list_value = [value]   # restart the ID list with the current ID

infile.close()
outfile.close()
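Conceptually, what I am aiming for in awk is to rotate to the next output file whenever a fourth distinct ID appears. Roughly something like this sketch (untested at scale; it assumes whitespace-separated columns and that the header row should be skipped, and the Output file names are placeholders):
Code:
awk 'BEGIN { n = 1 }
     NR == 1    { next }                                     # skip the header row
     $1 != prev { prev = $1; ids++ }                         # first line of a new ID
     ids > 3    { close("Output" n ".txt"); ids = 1; n++ }   # 4th distinct ID: rotate files
     { print > ("Output" n ".txt") }' Input.txt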
Thanks!
Last edited by Asoo; 06-22-2017 at 02:49 AM.
06-21-2017, 09:23 AM | #2
pan64, LQ Addict (Registered: Mar 2012; Location: Hungary; Distribution: debian/ubuntu/suse ...; Posts: 22,745)
I would say awk will not be faster, so it is better to try to speed up the Python.
You never close outfile, which may cause problems.
Can you tell us something about the size of this file and the running time?
06-21-2017, 09:26 AM | #3
BW-userx, LQ Guru (Registered: Sep 2013; Location: Somewhere in my head.; Distribution: Slackware (15 current), Slack15, Ubuntu Studio, MX Linux, FreeBSD 13.1, Win10; Posts: 10,342)
the bigger a plate of food the longer it will take to eat all of it.
06-21-2017, 01:23 PM | #4
jeremy, root (Registered: Jun 2000; Distribution: Debian, Red Hat, Slackware, Fedora, Ubuntu; Posts: 13,609)
BW-userx, you seem to have formed a pattern of posting off-topic or non-constructive comments. Please refrain from this moving forward. If you have any questions, feel free to contact a mod or myself directly.
--jeremy
06-21-2017, 01:43 PM | #5
Turbocapitalist, LQ Guru (Registered: Apr 2005; Distribution: Linux Mint, Devuan, OpenBSD; Posts: 7,521)
Quote:
Originally Posted by Asoo
I want to group the data so that each generated file contains no more than 3 distinct IDs.

Can you post a quick sample of what you want the output to look like based on the input data you have shown?
06-21-2017, 01:44 PM | #6
BW-userx, LQ Guru (Registered: Sep 2013; Location: Somewhere in my head.; Distribution: Slackware (15 current), Slack15, Ubuntu Studio, MX Linux, FreeBSD 13.1, Win10; Posts: 10,342)
Quote:
Originally Posted by jeremy
BW-userx, you seem to have formed a pattern of posting off-topic or non-constructive comments. Please refrain from this moving forward. If you have any questions, feel free to contact a mod or myself directly.
--jeremy

"the bigger a plate of food the longer it will take to eat all of it." is a metaphoric explanation of why it takes longer to process bigger files than smaller ones. If you need me to PM that to you, I will.
It is in reference to this in post #1:
"it works, but the file is huge, so it takes a long time."
and this in post #2:
"I would say awk will not be faster, so it is better to try to speed up the Python."
Last edited by BW-userx; 06-21-2017 at 02:18 PM.
06-22-2017, 02:47 AM | #7
Asoo, Member (Original Poster; Registered: Apr 2017; Posts: 33)
Quote:
Originally Posted by Turbocapitalist
Can you post a quick sample of what you want the output to look like based on the input data you have shown?
Thank you so much for the reply.
The first file should look like this:
Code:
1 1 2
1 2 4
1 3 6
2 4 8
2 5 10
3 6 12
3 7 14
3 8 16
3 9 18
The second file should look like this:
Code:
4 10 20
5 11 22
5 12 24
06-22-2017, 02:49 AM | #8
Asoo, Member (Original Poster; Registered: Apr 2017; Posts: 33)
Quote:
Originally Posted by pan64
I would say awk will not be faster, so it is better to try to speed up the Python.
You never close outfile, which may cause problems.
Can you tell us something about the size of this file and the running time?
Yes, the output files should be closed; I missed that line and have edited my question.
The file is almost 600 GB.
06-22-2017, 02:58 AM | #9
pan64, LQ Addict (Registered: Mar 2012; Location: Hungary; Distribution: debian/ubuntu/suse ...; Posts: 22,745)
I would say you can try:
Code:
time cat file > newname
so you can check how long it takes just to read and rewrite the file. Also time your Python script; in theory it should take almost the same, since the job is mostly I/O. Your script is only worth improving if it is definitely slower than the plain copy.
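For example, to compare the two timings side by side (the script file name here is just a placeholder):
Code:
time cat Input.txt > newname     # raw read/write time: a lower bound
time python group_ids.py         # the grouping script, for comparison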
1 member found this post helpful.
06-22-2017, 03:18 AM | #10
Asoo, Member (Original Poster; Registered: Apr 2017; Posts: 33)
Quote:
Originally Posted by pan64
I would say you can try:
Code:
time cat file > newname
Okay, I will try this command and update you with the timing. I will also look into improving the script.
Thank you!
06-22-2017, 04:51 AM | #11
JJJCR, Senior Member (Registered: Apr 2010; Posts: 2,223)
600 GB is quite massive for a text file.
I think the best way to approach it is to split it into multiple files, process those into multiple outputs, then combine the outputs into a manageable file and come up with the desired output; a rough sketch follows below.
I'm just curious: what text editor do you use to open the 600 GB file?
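For instance, a sketch with GNU split (the 50 GB chunk size is illustrative; note that size-based splitting alone will not keep all lines of one ID in the same chunk):
Code:
# -C: at most 50G of whole lines per output file; -d: use numeric suffixes
split -C 50G -d Input.txt chunk_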
06-22-2017, 04:55 AM | #12
JJJCR, Senior Member (Registered: Apr 2010; Posts: 2,223)
Quote:
Originally Posted by Asoo
Thank you so much for the reply.
The first file should look like this:
Code:
1 1 2
1 2 4
1 3 6
2 4 8
2 5 10
3 6 12
3 7 14
3 8 16
3 9 18
The second file should look like this:
Code:
4 10 20
5 11 22
5 12 24
Quote:
I want to group the data so that each generated file contains no more than 3 distinct IDs.
In post #1 you mentioned that each generated file should contain no more than 3 IDs, but your desired output has more than 3 IDs. I assumed the first column is the ID, based on post #1.
Last edited by JJJCR; 06-22-2017 at 04:56 AM.
Reason: edit
06-22-2017, 05:00 AM | #13
Asoo, Member (Original Poster; Registered: Apr 2017; Posts: 33)
Quote:
Originally Posted by JJJCR
600 GB is quite massive for a text file.
I think the best way to approach it is to split it into multiple files, process those into multiple outputs, then combine the outputs into a manageable file and come up with the desired output.
I'm just curious: what text editor do you use to open the 600 GB file?
This file is the result of sequence processing, and I want to use it for further processing. That's why I need to group and chunk it into multiple files of roughly 50 GB each; afterwards I can combine the outputs.
I am using vi as the text editor.
06-22-2017, 05:01 AM | #14
Asoo, Member (Original Poster; Registered: Apr 2017; Posts: 33)
Quote:
Originally Posted by JJJCR
In post #1 you mentioned that each generated file should contain no more than 3 IDs, but your desired output has more than 3 IDs. I assumed the first column is the ID, based on post #1.
There are only 3 IDs in the first file (1, 2, and 3), but each ID has multiple rows, all of which should be included.
06-22-2017, 05:56 AM | #15
pan64, LQ Addict (Registered: Mar 2012; Location: Hungary; Distribution: debian/ubuntu/suse ...; Posts: 22,745)
Probably you need to generate several pieces in the first place, instead of that one big file.