
LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Converting a file with Rows and Columns to just Columns (https://www.linuxquestions.org/questions/linux-newbie-8/converting-a-file-with-rows-and-columns-to-just-columns-4175497043/)

mphillips67 03-04-2014 06:40 PM

Converting a file with Rows and Columns to just Columns
 
I have a file with entries that look like this:

Pos
148

A 0
C 0
G 0.081985
T 0.918015
207

A 0.021697
C 0.978303
G 0
T 0

I need to convert this to something that looks like:

Pos A C G T
148 0 0 0.081985 0.918015
207 0.021697 0.978303 0 0


So, my "Pos" entries are more or less already in a column. However, I need to convert the A, C, G, T rows to columns.

Any help would be appreciated.

schneidz 03-04-2014 07:44 PM

What have you tried and where are you stuck?

I think awk would be useful here.

szboardstretcher 03-04-2014 08:27 PM

Question: Does this have something to do with Genomes?

mphillips67 03-04-2014 08:33 PM

I haven't tried anything specific yet. I found a few methods to convert rows to columns, most using sed, but I am at a loss for how to apply that to what I am working with. I basically would like to convert the A, C, G, and T rows to columns, then have their entries line up with the positions.

Also, yes, these are nucleotide frequencies at a given position.

szboardstretcher 03-04-2014 08:36 PM

Is this the fixed format of the data? Is it repeated EXACTLY like this over and over (allowing for different data, obviously)? Like this, in perpetuity:

Code:

Pos       
148       

A        0
C        0
G        0.081985
T        0.918015
207       

A        0.021697
C        0.978303
G        0
T        0

Pos       
148       

A        0
C        0
G        0.081985
T        0.918015
207       

A        0.021697
C        0.978303
G        0
T        0

Pos       
148       

A        0
C        0
G        0.081985
T        0.918015
207       

A        0.021697
C        0.978303
G        0
T        0

and so on

If not, can you post a selection of the actual data? EXACTLY how it is in your file?

mphillips67 03-04-2014 08:49 PM

That is almost exactly how it is, except the "Pos" and "freq" aren't repeated. Here is a copy of lines directly from the file:

Pos freq
148

A 0.000000
C 0.000000
G 0.081985
T 0.918015
207

A 0.021697
C 0.978303
G 0.000000
T 0.000000
208

A 0.979209
C 0.000000
G 0.020791
T 0.000000

grail 03-04-2014 09:30 PM

Try searching for 'columns to rows' on this site, as this has been done multiple times. I would include the keyword 'awk' as well, since columnized data is better suited to that command than to sed.
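
As a rough illustration of why awk fits (only a sketch, assuming the exact layout shown in post #6: a "Pos freq" header line, a lone position number, an optional blank line, then four letter/value pairs), something along these lines collects each block and prints it as one row:

Code:

awk '/^Pos/ {next}                # skip the "Pos freq" header line
     NF==1  {pos=$1; n=0; next}   # a lone number starts a new position
     NF==2  {val[++n]=$2}         # collect the A, C, G, T frequencies
     n==4   {print pos, val[1], val[2], val[3], val[4]; n=0}' file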

szboardstretcher 03-04-2014 10:01 PM

And, without diving into a code exercise, and sticking to bash alone:

Requirements:
  • remove the "Pos" line at the beginning of the file
  • structure stays the same throughout the file

Code:

while read line1; do #first line pos
  read line2 # blank
  read line3 # A
  read line4 # C
  read line5 # G
  read line6 # T
  echo $line1 ${line3:1} ${line4:1} ${line5:1} ${line6:1} >> output_file
done < data


mphillips67 03-04-2014 10:26 PM

That did the trick. Thank you!

grail 03-04-2014 11:50 PM

hmmmm ... I am curious how that did the trick for you?

Based on the data and format in post #6 and using the code from post #8, the output I get is:
Code:

Pos freq 0.000000 0.000000 0.081985
T 0.918015 0.021697 0.978303 0.000000
T 0.000000 0.979209 0.000000 0.020791

Now to me this does not look like your desired output?

Assuming you altered the snippet provided, maybe you could show your solution that does provide the output you were looking for, so others may benefit :)

mphillips67 03-05-2014 12:21 AM

Yea, the file had to be altered slightly for it to work. Based on the requirements szboardstretcher outlined, I removed the first line containing "Pos" and "freq". I can't attest to whether or not it's the most efficient way, but I basically did it in two steps as shown below:


Code:

more +2 oldfile > newfile

while read line1; do #first line pos
  read line2 # blank
  read line3 # A
  read line4 # C
  read line5 # G
  read line6 # T
  echo $line1 ${line3:1} ${line4:1} ${line5:1} ${line6:1} >> output_file
done < newfile

The columns don't have labels, but I will be working with these files in R so will take care of that when I read them in.
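
As a small aside, if you ever wanted the labels in the text file itself rather than adding them in R, prepending a header line in the shell works too (just a sketch; labeled_file is an arbitrary name):

Code:

{ echo "Pos A C G T"; cat output_file; } > labeled_file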

grail 03-05-2014 02:39 AM

Cheers :)

So now, using your addition, the correct output is:
Code:

148 0 0 0.081985 0.918015
207 0.021697 0.978303 0 0
48 0 0
G 0.081985 07 0.021697 0.978303
G 0 os 48
A 0 0.081985 0.918015 07
A 0.021697 0 0


szboardstretcher 03-05-2014 07:54 AM

Grail, not sure what you are doing...

Edit: You are using the original data. The OP provided another snippet further down in the thread.

Code:

# The data
[root@dev ~]# cat data
148

A 0.000000
C 0.000000
G 0.081985
T 0.918015
207

A 0.021697
C 0.978303
G 0.000000
T 0.000000
208

A 0.979209
C 0.000000
G 0.020791
T 0.000000

Code:

# the script
[root@dev ~]# cat read_data.sh
while read line1; do #first line pos
  read line2 # blank
  read line3 # A
  read line4 # C
  read line5 # G
  read line6 # T
  echo $line1 ${line3:1} ${line4:1} ${line5:1} ${line6:1}
done < data

Code:

# the output
[root@dev ~]# ./read_data.sh
148 0.000000 0.000000 0.081985 0.918015
207 0.021697 0.978303 0.000000 0.000000
208 0.979209 0.000000 0.020791 0.000000


allend 03-05-2014 08:34 AM

Just to finesse this a little for posterity's sake, grail has a point about the substring indexing (which actually starts at zero) in the parameter expansions used in the echo command. It works with :1 only because the leading blank that :1 keeps is then dropped by word splitting in the unquoted echo; a short demonstration follows the output below.
Using the data in post#6
Code:

tail -n +2 data.txt | \
while read line1; do
 read line2;
 read line3;
 read line4;
 read line5;
 read line6;
 echo "Pos ${line3:0:1} ${line4:0:1} ${line5:0:1} ${line6:0:1}";
 echo "$line1 ${line3:2} ${line4:2} ${line5:2} ${line6:2}";
done

Produces
Code:

Pos A C G T
148 0.000000 0.000000 0.081985 0.918015
Pos A C G T
207 0.021697 0.978303 0.000000 0.000000
Pos A C G T
208 0.979209 0.000000 0.020791 0.000000

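For anyone following along, here is what those offsets actually return for one line of the sample data (a small demonstration, not part of the solution):

Code:

line3="A 0.000000"
echo "${line3:0:1}"   # A          (offset 0, length 1: the base letter)
echo "${line3:2}"     # 0.000000   (everything from offset 2, past "A ")
echo "[${line3:1}]"   # [ 0.000000] offset 1 keeps the leading blank; the
                      # unquoted echo in post #8 word-splits it away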

grail 03-05-2014 10:31 AM

@szboardstretcher - you are right, and I am not sure what was happening. I have now run the same code at home and all is fine ... go figure ... sorry for the confusion.

I should at least offer a solution after all those shenanigans:
Code:

awk '{ORS=/T/?"\n":OFS}!/^$/{print $NF}' file

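In case the one-liner looks cryptic: it prints the last field of every non-blank line, and ORS (the output record separator) is switched to a newline whenever the line contains a T, so each T value ends a row. A more spelled-out equivalent (just a sketch doing the same thing; it assumes the "Pos freq" header has already been stripped, e.g. with the tail -n +2 shown above, otherwise "freq" leaks into the start of the first row):

Code:

awk '!/^$/ {
        sep = ($1 == "T") ? "\n" : " "   # a T line ends the row
        printf "%s%s", $NF, sep          # last field, then separator
}' file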
