Using cut on file full of ls -l output to display only filenames
I have a file that contains "ls -la" output. I would like to display only the filenames, none of the other information before it such as permissions, ownership, size, and date.
Would the cut command be the best way to hit this, or should I use Vim or sed?
That's close, but the ls -la output is already in a file. So I just want to run awk against that file and strip out everything before the filename (the 9th field) on each line.
Edit: Looks like this will work: awk '{print $9}' file > newfile
It will if you happen not to have any files with spaces in their names... otherwise a slightly more convoluted approach may work:
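The code for that approach didn't survive in the post. A common way to handle spaces is to print field 9 onward, rejoining with single spaces (a sketch; note it would still collapse runs of multiple spaces inside a name):

```shell
# Print everything from field 9 onward so names with spaces stay intact;
# "file" is the file holding the saved "ls -la" output
awk '{ out = $9; for (i = 10; i <= NF; i++) out = out " " $i; print out }' file > newfile
```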
I very much liked Tinkster's method using awk. In my case, I needed to eliminate "symlinks" from the list of files, and then store the filename only in a file used by "tar" to create an archive. I'm already at the directory I wish to "tar", so all I had to do was this:
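The command itself was lost from the post; judging by the description that follows, it was presumably a pipeline along these lines:

```shell
# Sketch of the described pipeline: the first grep drops symlinks,
# awk keeps field 9 (the filename), and the final grep drops the blank
# line left over from the "total" header of "ls -l"
ls -l | grep -v '^l' | awk '{print $9}' | grep -v '^$' > /tmp/xfiles
```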
I then used /tmp/xfiles as the tar-control file for creating the tar-file. The initial grep eliminates the symlinks, and the final grep eliminates any empty lines, such as those created at the start of the "ls -l" output. I didn't need to use the -a option. That would have taken more work to clean up the file list.
Lastly, I should elaborate on the "symlink" problem. In Linux (Ubuntu), it's possible to have a lower-case or mixed-case symlink to an all-UPPER-case filename in the same directory. Create an archive from that directory and move it to a Mac, and you end up with a file you can't extract: Macs are case-blind when it comes to filenames, so the original file AND the symlink create a conflict that makes it virtually impossible to unpack the archive. Therefore, I had to eliminate the conflicting symlink while building the archive.
In general you should not use grep|awk|sed|grep (or similar) chains; the task can usually be solved by a single awk/perl/python/whatever script.
In your case, for example:
Code:
# instead of
grep -v '^l'
# awk can be used:
awk '/^l/ { next }'
# instead of
grep -v '^$'
# you can use
/^$/ { next }
# instead of sed
# you can try gsub
so in general you can try something like this:
Code:
ls -l | awk ' /^l/ { next }
/^\s*$/ { next }
{ print $9 }' > /tmp/xfiles
The suggestion from pan64 fails in two ways: 1) it leaves a blank line caused by the "Total xxxxx" at the start of "ls -l", and 2) it only captures the first non-blank portion of filenames with embedded blanks.
The suggestion from MadeInGermany is clever, if what's in 'file' is the output from "ls -l". It reads the first eight parts and discards them, and leaves me with what's left, but it also leaves me with a blank line from "Total xxxxx".
What I need to save is the complete filename following the first eight fields in "ls -l", with no leading blanks and no blank lines. I need to use "ls -l" to eliminate "symlinks" with "grep -v '^l'", then extract the complete filenames. MadeInGermany came close, but I can't hand the 'file' to 'tar'.
I did not want to give you a full solution (and actually I do not know what you really need), but I gave you an idea which can easily be extended to fulfil all your needs.
Quote:
Originally Posted by Dickster2
The suggestion from MadeInGermany makes no sense. What's in the "file"? If it's the output from "ls -l", then it reads the eight parts that I want discarded from "ls -l", and leaves me with just the first non-blank portion of filenames with embedded blanks.
Perhaps you do not understand it, but it does make sense: 'file' actually is a file; specifically, it is the file you were talking about:
Quote:
I have a file that contains "ls -la" output.
and the tip he gave will print the filenames.
Quote:
Originally Posted by Dickster2
What I need to save is the complete filename following the first eight fields in "ls -l", with no leading blanks and no blank lines. I need to use "ls -l" to eliminate "symlinks" with "grep -v '^l'", then extract the complete filenames.
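Putting those pieces together, a single awk script can skip symlinks and blank/"total" lines and keep the complete filename, including embedded blanks. This is a sketch under the assumption of standard nine-field "ls -l" output; it peels off the first eight whitespace-separated fields instead of rejoining fields, so even runs of multiple spaces inside a name survive:

```shell
ls -l | awk '
  /^l/   { next }                  # skip symlinks
  NF < 9 { next }                  # skip the "total" header and blank lines
  {
    for (i = 1; i <= 8; i++)
      sub(/^[^ \t]+[ \t]+/, "")    # peel one field plus its trailing blanks off $0
    print                          # what remains is the full filename
  }' > /tmp/xfiles
```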