Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Hi! I need to delete the first and last lines of a bunch of files. I *believe* this should be easy, as the first and last lines of every file contain the phrase 'GSHES'.
The big problem is that I want to do this to hundreds of files. I need something that will go through and just whack off the first and last lines of every file.
sed '$d' *.* > NOLASTLINES.dat
will lop off all of the last lines just fine, but I still have the first lines, and now there's only ONE first line! So I can't then do, say:
This smells of homework to me, so here is my oblique reply.
Grep will filter based on the contents of lines. You can filter for a pattern, or for everything except a pattern. The manual page will tell you which option to use for this. Search for "invert".
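To make that hint concrete, here is a minimal sketch. The file name and contents are made up for illustration; the point is grep's invert option:

```shell
# Build a small sample file whose first and last lines carry the GSHES marker
printf 'GSHES header\ndata 1\ndata 2\nGSHES footer\n' > sample.dat

# -v inverts the match: print only lines that do NOT contain GSHES
grep -v 'GSHES' sample.dat
# prints:
# data 1
# data 2
```

Note this removes *every* line containing GSHES, not just the first and last, which is fine here only because the poster says the marker appears exclusively on those lines.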
Sed is rather more flexible than grep. You can specify a range of input lines to operate on, plus one or more operations. The thing to know is that it goes through a cycle of reading input, evaluating addresses, and executing commands. You can use the q command to quit before other commands are executed, so you can probably imagine how to avoid printing the last line. Also, read what the -n option does in the manual page.
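Since the question's own attempt ran sed over a merged stream, a sketch of the per-file approach may help. The `1d;$d` addresses delete the first and last line of each input; running sed once per file keeps the `$` address pointing at each file's own last line. The file name and `*.dat` glob are placeholder examples, and `-i` with a backup suffix is a GNU/BSD sed extension:

```shell
# Sample input: first and last lines are the ones to discard
printf 'GSHES top\nkeep me\nGSHES bottom\n' > demo.dat

# '1d' deletes line 1, '$d' deletes the last line; sed runs once per file
for f in *.dat; do
    sed -i.bak '1d;$d' "$f"    # -i.bak edits in place, keeping a .bak backup
done

cat demo.dat
# prints:
# keep me
```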
I haven't got a terminal to test in, but I suspect this could be done with head, tail, grep and echo as well. I would wrap them in a script, then use find (or something else) to produce the file list and run the script on each file (xargs is one way, or find's -exec). The script would use head and tail to grab the first and last line(s), and grep+echo would then write everything except those lines back to the original file. I haven't got a ready script, but it shouldn't be too difficult. Efficiency is another matter (is this method faster than sed, or something else, for a large number of big files?), but it should work, and since these are basic tools (grep, head, tail, echo, maybe xargs or find), it should be portable and easy to put together.
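A sketch along those lines, though without the grep+echo step: `tail -n +2` skips the first line and `head -n -1` drops the last (the `-n -1` form is a GNU coreutils extension, so this is not fully portable). The script name and file names are placeholder examples:

```shell
# strip_ends.sh: remove the first and last line of every file named on the
# command line, via a temp file so the original is replaced atomically.
cat > strip_ends.sh <<'EOF'
#!/bin/sh
for f in "$@"; do
    tmp=$(mktemp)
    tail -n +2 "$f" | head -n -1 > "$tmp" && mv "$tmp" "$f"
done
EOF

printf 'GSHES a\nmiddle\nGSHES z\n' > one.dat
sh strip_ends.sh one.dat
cat one.dat
# prints:
# middle
```

With find, something like `find . -name '*.dat' -exec sh strip_ends.sh {} +` would apply it across a whole tree.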
I tend to start off with the simplest method I can think of, because there is no sense in shooting a fly with a shotgun. Rather use a needle, even against a bull, if it does the job.