Problem with grep matching to end of line
Ok, I've been running Linux for years and I feel like an idiot even asking this. It's probably going to be a "duh" answer, but anyway.
I diffed 2 files and grepped the output for just the new stuff:
Code:
diff ls-lt ls-lt.old | grep '<' | grep -o sn.0[0-3][0-9][0-9].txt
but nothing came out, so I played with it for a little while. When I ran this:
Code:
diff ls-lt ls-lt.old | grep '<' | grep -o sn.0[0-3][0-9][0-9].tx
I got the "sn.0???.tx" output, but I can't get that last "t" out of it. What am I doing wrong? I can just add it to the end of every line, that's not a problem, but I'm wondering what's going on here and how to deal with it in case I have something a little less constant in the future. |
Looks like it should work - maybe try quotes on that as well.
|
Quotes did it. Don't know why I didn't try that... guess I can put my dunce hat on for the night.
Code:
diff ls-lt ls-lt.old | grep '<' | grep -o 'sn.0[0-3][0-9][0-9].txt'
Thank you, I was about to pull my hair out. I am still curious as to why that is needed. Is there something special about the end of the line, or could it be a bug? |
You could use "cut" or "awk" to cut out the column you want.
Code:
diff ls-lt ls-lt.old | grep '<' | cut -d' ' -f5
Code:
diff ls-lt ls-lt.old | awk '/</{ print $5 }' |
Ok, I'll keep that in mind. I'm writing some scripts that are going to be dealing with more complex filenames and such. I was just curious whether there was something grep was seeing that I don't know about, which may become an issue later with other programs like sed that use regular expressions.
|
Must be a problem on your machine - I tried it with some of your data, and the first regex (no quotes) worked fine.
|
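For what it's worth, a likely explanation (my assumption, not confirmed in the thread): without quotes, the shell treats the pattern as a filename glob. If files matching sn.0[0-3][0-9][0-9].txt happen to exist in the current directory, the shell expands the glob before grep ever sees it, and the first expanded filename becomes the pattern. That would also explain why the same unquoted command works on one machine and not another - it depends on what is in the working directory. A hypothetical demonstration:

```shell
# Hypothetical demo (filenames invented): directory contents decide what grep receives.
cd "$(mktemp -d)"
touch sn.0111.txt sn.0222.txt          # two files that match the glob
printf 'sn.0222.txt\n' > data

# Unquoted: the shell expands the glob first, so this actually runs
#   grep -o sn.0111.txt sn.0222.txt data
# i.e. the first filename becomes the pattern and the match is lost.
grep -o sn.0[0-3][0-9][0-9].txt data

# Quoted: grep receives the regex untouched and does the matching itself.
grep -o 'sn.0[0-3][0-9][0-9].txt' data
```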
grep start at line
How do I start grep at a given line, say line 200?
(My output is too large and produces a segmentation fault, so I'm going to have to grep 100 or so lines at a time.) |
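One way to do it (a sketch; `yourfile` and `pattern` are placeholders): let tail or sed skip to the starting line and pipe the rest to grep, or take an explicit 100-line window.

```shell
# Start matching at line 200: tail -n +200 emits everything from line 200 onward.
tail -n +200 yourfile | grep 'pattern'

# Or take a 100-line window (lines 200-299) with sed, quitting at line 300
# so the rest of the large file is never read.
sed -n '200,299p;300q' yourfile | grep 'pattern'
```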
Maybe wrap it in a loop thus:
Code:
for rec in `cat yourfile`
do
    # process $rec here
done |
Mmmmm - don't know about that; lots of blanks in there to trip over the default IFS.
If it were me doing this, at about this point I'd be starting to think Perl.... |
Quote:
Code:
diff ls-lt ls-lt.old | awk '/</{ print $5 }' |
You can set IFS to newline-only easily enough.
Perl would be faster, though, if the file to process is that big; it just seemed a bit excessive if he only wants the last field of each record. It also depends on whether this is a one-off requirement or something that will be run multiple times. |
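For the record, a sketch of both approaches (`yourfile` is a placeholder): either restrict IFS to a newline around the for loop, or use a while-read loop, which avoids word splitting on the data altogether.

```shell
# Option 1: set IFS to newline only for the duration of the loop.
OLDIFS=$IFS
IFS='
'
for rec in `cat yourfile`; do
    printf '%s\n' "$rec"       # each $rec is a whole line, blanks included
done
IFS=$OLDIFS

# Option 2 (usually preferable): while-read, one record per line,
# with no word splitting and no glob expansion of the data.
while IFS= read -r rec; do
    printf '%s\n' "$rec"
done < yourfile
```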
Code:
awk '/^</{for (i=1;i<=9;i++) $i=""; print}' yourfile |
If you want to use a range of lines, in sed you can start a sed command with a numeric range like 2000,2500, or use regular-expression addresses like /\[General\]/,/^$/. You can also use braces to look for a match within a range, and add a quit command once a match is found; that is commonly done to avoid reading all of the lines in a large file: Code:
sed -n '2000,2500{/pattern/p}' yourfile
The cut command would be ideal for cutting out a certain field; that is exactly what it was written for. You could use head to select the top N lines and pipe the output to cut. If you want a certain range of lines, you can use sed to filter through only those lines and pipe the output to the cut command: Code:
diff ls-lt ls-lt.old | sed -n "${start},${end}p;$((end+1))q" | grep '<' | cut -d' ' -f5 |