Old 06-21-2017, 08:38 AM   #1
Asoo
Member
 
Registered: Apr 2017
Posts: 33

Rep: Reputation: Disabled
How to group data in Linux


Greetings!

My file looks like this

Code:
ID	Num1	Num2
1	1	2
1	2	4
1	3	6
2	4	8
2	5	10
3	6	12
3	7	14
3	8	16
3	9	18
4	10	20
5	11	22
5	12	24
I want to group the data so that each generated file contains no more than 3 distinct IDs. I have written a Python script and it works, but the file is huge, so it is taking a lot of time.

I have tried writing some awk commands but cannot figure out how to put the conditions into the command. (I am trying to use the command in a loop to create the multiple files.)

In Python the code looks like this:

Code:
infile = open("Input.txt", "r")
outfile = open("Output1.txt", "w")

list_value = []  # distinct values of the 1st column seen in the current file
file_num = 2     # number used to name the next output file
next(infile)     # skip the header line
for line in infile:
    value = line.split()[0]
    if value in list_value:           # ID already belongs to the current file
        outfile.write(line)
    else:
        list_value.append(value)
        if len(list_value) < 4:       # still at most 3 IDs in this file
            outfile.write(line)
        else:                         # a 4th ID: start the next output file
            outfile.close()           # close the finished chunk first
            outfile = open("Output" + str(file_num) + ".txt", "w")
            outfile.write(line)
            file_num = file_num + 1
            list_value = [value]
infile.close()
outfile.close()

Thanks!

Last edited by Asoo; 06-22-2017 at 02:49 AM.
 
Old 06-21-2017, 09:23 AM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,957

Rep: Reputation: 7329
I would say awk will not be faster, so it is better to try to speed up the Python.
You never close outfile, which may cause problems.
Can you tell us something about the size of this file and the running time?
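
For comparison, the same grouping in awk might look roughly like this (an untested sketch; it assumes whitespace-separated input with one header line, and I would still not expect it to beat the Python):

Code:
awk 'NR == 1 { next }                      # skip the header line
     $1 != prev { prev = $1; count++ }     # first row of a new ID
     count > 3 {                           # a 4th distinct ID appears
         close("Output" fnum ".txt")       # close the finished chunk
         count = 1; fnum++
     }
     { print > ("Output" fnum ".txt") }
' fnum=1 Input.txt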
 
Old 06-21-2017, 09:26 AM   #3
BW-userx
LQ Guru
 
Registered: Sep 2013
Location: Somewhere in my head.
Distribution: Slackware (15 current), Slack15, Ubuntu studio, MX Linux, FreeBSD 13.1, Win10
Posts: 10,342

Rep: Reputation: 2242
The bigger a plate of food, the longer it will take to eat all of it.
 
Old 06-21-2017, 01:23 PM   #4
jeremy
root
 
Registered: Jun 2000
Distribution: Debian, Red Hat, Slackware, Fedora, Ubuntu
Posts: 13,602

Rep: Reputation: 4084
BW-userx, you seem to have formed a pattern of posting off-topic or non-constructive comments. Please refrain from this moving forward. If you have any questions, feel free to contact a mod or myself directly.

--jeremy
 
Old 06-21-2017, 01:43 PM   #5
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,333
Blog Entries: 3

Rep: Reputation: 3729
Quote:
Originally Posted by Asoo
I want to group the data so that each generated file contains no more than 3 distinct IDs.
Can you post a quick sample of what you want the output to look like based on the input data you have shown?
 
Old 06-21-2017, 01:44 PM   #6
BW-userx
LQ Guru
 
Registered: Sep 2013
Location: Somewhere in my head.
Distribution: Slackware (15 current), Slack15, Ubuntu studio, MX Linux, FreeBSD 13.1, Win10
Posts: 10,342

Rep: Reputation: 2242
Quote:
Originally Posted by jeremy
BW-userx, you seem to have formed a pattern of posting off-topic or non-constructive comments. Please refrain from this moving forward. If you have any questions, feel free to contact a mod or myself directly.

--jeremy
The bigger a plate of food, the longer it will take to eat all of it.
is a metaphorical explanation of why it takes longer to process bigger files than smaller ones. If you need me to PM that to you, I will.

It was in reference to this in post #1:
"it works, but the file is huge, so it is taking a lot of time."
and this in post #2:
"I would say awk will not be faster, so it is better to try to speed up the Python."

Last edited by BW-userx; 06-21-2017 at 02:18 PM.
 
Old 06-22-2017, 02:47 AM   #7
Asoo
Member
 
Registered: Apr 2017
Posts: 33

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by Turbocapitalist
Can you post a quick sample of what you want the output to look like based on the input data you have shown?
Thank you so much for the reply.

1st file should look like this:

Code:
1   1   2
1   2   4
1   3   6
2   4   8
2   5   10
3   6   12
3   7   14
3   8   16
3   9   18
Second file should look like this:
Code:
4   10  20
5   11  22
5   12  24
 
Old 06-22-2017, 02:49 AM   #8
Asoo
Member
 
Registered: Apr 2017
Posts: 33

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by pan64
I would say awk will not be faster, so it is better to try to speed up the Python.
You never close outfile, which may cause problems.
Can you tell us something about the size of this file and the running time?
Yes, the outfiles should be closed; I missed that line and have edited my question.

The file is almost 600 GB.
 
Old 06-22-2017, 02:58 AM   #9
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,957

Rep: Reputation: 7329
I would say you can try:
Code:
time cat file > newname
so you can check how long it takes. Also time your Python script; theoretically it should take almost the same. You can only improve your script if it is definitely slower.
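
A similar measurement for the script itself (the script name here is illustrative):

Code:
time python group_by_id.py

If the script is not much slower than the cat baseline, the job is I/O-bound and there is little left to optimize in the code.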
 
1 member found this post helpful.
Old 06-22-2017, 03:18 AM   #10
Asoo
Member
 
Registered: Apr 2017
Posts: 33

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by pan64
I would say you can try:
Code:
time cat file > newname
Okay, I will try this command and update you with the time. I'll also look into how to improve the script.

Thank You!
 
Old 06-22-2017, 04:51 AM   #11
JJJCR
Senior Member
 
Registered: Apr 2010
Posts: 2,162

Rep: Reputation: 449
600 GB is quite massive for a text file.

I think the best approach is to split it into multiple files, process those into multiple outputs, then combine the outputs into a manageable file with the desired result.
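
For example (a sketch assuming GNU coreutils; the 50G size is illustrative):

Code:
# cut the file into ~50 GB pieces without splitting lines;
# note this does NOT keep all rows of one ID together
split --line-bytes=50G Input.txt chunk_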

I'm just curious: what text editor do you use to open the 600 GB file?
 
Old 06-22-2017, 04:55 AM   #12
JJJCR
Senior Member
 
Registered: Apr 2010
Posts: 2,162

Rep: Reputation: 449

Quote:
Originally Posted by Asoo
Thank you so much for the reply.

1st file should look like this:

Code:
1   1   2
1   2   4
1   3   6
2   4   8
2   5   10
3   6   12
3   7   14
3   8   16
3   9   18
Second file should look like this:
Code:
4   10  20
5   11  22
5   12  24
Quote:
I want to group the data so that each generated file contains no more than 3 distinct IDs.
In post #1 you mentioned that each generated file should not contain more than 3 IDs, but in your desired output there are more than 3 IDs. I assume the first column is the ID, based on post #1.

Last edited by JJJCR; 06-22-2017 at 04:56 AM. Reason: edit
 
Old 06-22-2017, 05:00 AM   #13
Asoo
Member
 
Registered: Apr 2017
Posts: 33

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by JJJCR
600 GB is quite massive for a text file.

I think the best approach is to split it into multiple files, process those into multiple outputs, then combine the outputs into a manageable file with the desired result.

I'm just curious: what text editor do you use to open the 600 GB file?
This file is the result of sequence processing, and I want to use it in further processing. That's why I need to group and chunk it into multiple files of roughly 50 GB each for the next step. After that, I can combine the output.

I am using vi as a text editor.
 
Old 06-22-2017, 05:01 AM   #14
Asoo
Member
 
Registered: Apr 2017
Posts: 33

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by JJJCR
In post #1 you mentioned that each generated file should not contain more than 3 IDs, but in your desired output there are more than 3 IDs. I assume the first column is the ID, based on post #1.
There are only 3 IDs in the first file (1, 2 and 3), but there are multiple rows for each ID, and all of those rows should be included.
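
A quick sanity check on a generated file (a sketch; it counts the distinct values of the first column, which should print at most 3):

Code:
awk '{ print $1 }' Output1.txt | sort -u | wc -l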
 
Old 06-22-2017, 05:56 AM   #15
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,957

Rep: Reputation: 7329
Probably you need to generate several pieces instead of that one big file.
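
For example (an untested sketch building on the awk idea above; the piece names and the 50 GB threshold are illustrative, and a new piece starts only on an ID boundary, so no ID is split across files):

Code:
awk 'NR == 1 { next }                  # skip the header line
     $1 != prev {                      # row begins a new ID group
         prev = $1
         if (bytes >= 50 * 2^30) {     # current piece has reached ~50 GB
             close(out); fnum++; bytes = 0
         }
     }
     { out = "piece" fnum ".txt"; print > out; bytes += length($0) + 1 }
' fnum=1 Input.txt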
 