LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Other *NIX (http://www.linuxquestions.org/questions/other-%2Anix-55/)
-   -   Append value of a row to subsequent rows (http://www.linuxquestions.org/questions/other-%2Anix-55/append-value-of-a-row-to-subsequent-rows-764759/)

surabhinair 10-27-2009 07:07 AM

Append value of a row to subsequent rows
 
Hi,

I have a source file where the data is in the following format.

01,America,0001,AUH
03,F,F ,001,000,012,000
04,F,F ,023,000,012,000
03,F,A ,002,005,012,005,Y
04,F,A ,023,000,012,000,000
03,F,R ,003,007,007,007,Y,002
04,F,R ,023,000,007,002,002,002
03,F,O ,004,000,000,000,Y,003
04,F,O ,023,000,000,000,000
01,EGYPT,0001,AUH
03,F,F ,001,001,012,001,Y
04,F,F ,023,000,012,001,001
01,SPAIN,0001,AUH
03,F,A ,021,000,012,000
04,F,F ,023,000,017,000
03,F,A ,002,105,012,008,Y
04,F,A ,023,000,012,000,000

I want to append the second-column value of each record starting with 01 to the subsequent records, until the next 01 record.

desired output:

01,America,0001,AUH
03,F,F ,001,000,012,000,America
04,F,F ,023,000,012,000,America
03,F,A ,002,005,012,005,Y,America
04,F,A ,023,000,012,000,000,America
03,F,R ,003,007,007,007,Y,002,America
04,F,R ,023,000,007,002,002,002,America
03,F,O ,004,000,000,000,Y,003,America
04,F,O ,023,000,000,000,000,America
01,EGYPT,0001,AUH
03,F,F ,001,001,012,001,Y,EGYPT
04,F,F ,023,000,012,001,001,EGYPT
01,SPAIN,0001,AUH
03,F,A ,021,000,012,000,SPAIN
04,F,F ,023,000,017,000,SPAIN
03,F,A ,002,105,012,008,Y,SPAIN
04,F,A ,023,000,012,000,000,SPAIN

Note that the order of the records matters and that the number of records between two consecutive 01 records is not fixed.

Any help on the same is appreciated.

druuna 10-27-2009 07:20 AM

Hi,

Something like this maybe:
Code:

#!/bin/bash

awk '
BEGIN { FS = "," }
{
  if ( $1 == "01" ) { Appender = $2 ; print $0 }
  else              { print $0 "," Appender }
}
' infile
' infile

Testrun:
Code:

$ cat infile
01,America,0001,AUH
03,F,F ,001,000,012,000
04,F,F ,023,000,012,000
03,F,A ,002,005,012,005,Y
04,F,A ,023,000,012,000,000
03,F,R ,003,007,007,007,Y,002
04,F,R ,023,000,007,002,002,002
03,F,O ,004,000,000,000,Y,003
04,F,O ,023,000,000,000,000
01,EGYPT,0001,AUH
03,F,F ,001,001,012,001,Y
04,F,F ,023,000,012,001,001
01,SPAIN,0001,AUH
03,F,A ,021,000,012,000
04,F,F ,023,000,017,000
03,F,A ,002,105,012,008,Y
04,F,A ,023,000,012,000,000

$ ./test

01,America,0001,AUH
03,F,F ,001,000,012,000,America
04,F,F ,023,000,012,000,America
03,F,A ,002,005,012,005,Y,America
04,F,A ,023,000,012,000,000,America
03,F,R ,003,007,007,007,Y,002,America
04,F,R ,023,000,007,002,002,002,America
03,F,O ,004,000,000,000,Y,003,America
04,F,O ,023,000,000,000,000,America
01,EGYPT,0001,AUH
03,F,F ,001,001,012,001,Y,EGYPT
04,F,F ,023,000,012,001,001,EGYPT
01,SPAIN,0001,AUH
03,F,A ,021,000,012,000,SPAIN
04,F,F ,023,000,017,000,SPAIN
03,F,A ,002,105,012,008,Y,SPAIN
04,F,A ,023,000,012,000,000,SPAIN

Hope this helps.

ghostdog74 10-27-2009 07:56 AM

Code:

$ awk -F"," '$1=="01"{s=$2;print;next}{print $0","s}' file
01,America,0001,AUH
03,F,F ,001,000,012,000,America
04,F,F ,023,000,012,000,America
03,F,A ,002,005,012,005,Y,America
04,F,A ,023,000,012,000,000,America
03,F,R ,003,007,007,007,Y,002,America
04,F,R ,023,000,007,002,002,002,America
03,F,O ,004,000,000,000,Y,003,America
04,F,O ,023,000,000,000,000,America
01,EGYPT,0001,AUH
03,F,F ,001,001,012,001,Y,EGYPT
04,F,F ,023,000,012,001,001,EGYPT
01,SPAIN,0001,AUH
03,F,A ,021,000,012,000,SPAIN
04,F,F ,023,000,017,000,SPAIN
03,F,A ,002,105,012,008,Y,SPAIN
04,F,A ,023,000,012,000,000,SPAIN


druuna 10-27-2009 08:04 AM

@ghostdog74: That's a nice, short one-liner!

Telemachos 10-27-2009 09:07 PM

For fun, a Perl option:

Code:

perl -ple 'if (m/^01/) { $loc = (split /,/)[1] } else { $_ = "$_,$loc" }' file
More awk-ishly Perl:

Code:

perl -F, -plae 'if ($F[0] eq "01") { $loc = $F[1] } else { $_ = "$_,$loc" }' file
(I like the first version better since it only splits the '01' lines, and in a big file that might amount to a significant difference.)
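For anyone reading along in another language, the same carry-forward idea (remember the second field of each 01 record, append it to every following record until the next 01) can be sketched in plain Python; the sample data below is a shortened excerpt of the input from this thread:

```python
# Carry forward the country from each "01" record and append it
# to the records that follow, until the next "01" record.
lines = """01,America,0001,AUH
03,F,F ,001,000,012,000
01,EGYPT,0001,AUH
03,F,F ,001,001,012,001,Y""".splitlines()

out = []
country = ""
for line in lines:
    fields = line.split(",")
    if fields[0] == "01":
        country = fields[1]          # remember the second column
        out.append(line)             # 01 header rows pass through unchanged
    else:
        out.append(line + "," + country)

print("\n".join(out))
```

This is the same state-machine pattern as the awk one-liner: one variable holds the most recent header value, and every non-header line gets it appended.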

