[SOLVED] Sorting a Recursive Directory Listing by time without dividing into subdirectories
So I wind up with many different lists of files by date modified, one for each subdirectory. Is there a way to get a list with similar details (and still showing the full path to each file, so removing the entries with sed and resorting won't cut it), but all in one list?
Quote:
Originally Posted by scorchgeek
So I wind up with many different lists of files by date modified, one for each subdirectory. Is there a way to get a list with similar details (and still showing the full path to each file, so removing the entries with sed and resorting won't cut it), but all in one list?
Try:
Code:
ls -lRt1
The files are still sorted per directory, but you get a full path for each file, and each directory's listing is time-sorted. Of course this will meet your sed criterion.
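If you'd rather have one flat list regardless of directory, a find/sort pipeline along these lines (a sketch, not the ls approach above) prints every file with its full path and sorts across the whole tree at once:

```shell
# %T@ is the mtime in epoch seconds; printing it first lets sort -n
# order ALL files oldest-first in a single list, full path included.
find ~ -type f -printf '%T@ %p\n' | sort -n
```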
Excellent, thank you very much. The first command works, but it's the opposite order of what I would like. How would I go about reversing the order of the output so the newest one is on the top? Also, is there a way to exclude certain directories (like cache, etc)? And finally, the columns don't line up neatly--I assume that's what you were trying to do with the second command?
The second command resulted in unreadable output, though--it fragmented filenames all over the place. Maybe some of my filenames were too long?
(And Larry, your command simply didn't work -- ls -lRt1 didn't give a full path for each entry.)
Quote:
Originally Posted by scorchgeek
Excellent, thank you very much. The first command works, but it's the opposite order of what I would like. How would I go about reversing the order of the output so the newest one is on the top?
Just reverse the output of the sort command using the r modifier:
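The exact command being reversed isn't quoted in this excerpt; assuming a pipeline that sorts numerically on a leading epoch timestamp (a sketch), it would look like this:

```shell
# sort -rn = numeric sort, reversed, so the most recently modified
# file becomes the first line of output instead of the last.
find ~ -type f -printf '%T@ %Tc %p\n' | sort -rn
```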
Quote:
Originally Posted by scorchgeek
Also, is there a way to exclude certain directories (like cache, etc)?
Yes, you have to refine the search criteria of the find command. What exactly are the directories you want to exclude? All the hidden ones or some specific subfolder like the browser cache one?
Quote:
Originally Posted by scorchgeek
And finally, the columns don't line up neatly--I assume that's what you were trying to do with the second command? The second command resulted in unreadable output, though--it fragmented filenames all over the place. Maybe some of my filenames were too long?
The awk part just puts a colon in the time field, and the column -t command formats the output into aligned columns. What do you mean by filenames fragmented all over the place? Can you please post an example of the garbled part of the output?
Let's begin from the excluding hidden directories part. I use the -regex predicate of the find command like this:
Code:
find ~ \( ! -regex '.*/\..*' \) -type f
In a similar way you can exclude any other directory name based on a pattern/regexp. Another approach is to use -prune, but if -regex does the job, you can spare yourself some time and maybe a headache for now.
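For a specific subfolder such as a cache directory, the -prune alternative might look like this (a sketch; the directory name "cache" is just an example):

```shell
# -prune stops find from descending into any directory named "cache";
# the -o ... -print branch then emits every remaining regular file.
find ~ -type d -name cache -prune -o -type f -print
```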
Regarding the long-lines problem: it occurs where file names contain spaces. The column command uses spaces as the separator by default, so a file name with blanks in it gets split across multiple columns. To avoid that you might use another separator, e.g. TAB, but you have to specify it in all the commands. Let's try:
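The TAB-separated version isn't shown in this excerpt; a sketch of how it might look, using TAB as the field separator at every stage of the pipe:

```shell
# Put a literal TAB between the fields find prints, and tell column to
# split on TAB only, so spaces inside file names survive intact.
tab=$(printf '\t')
find ~ -type f -printf "%TY-%Tm-%Td\t%TH:%TM\t%p\n" | sort -r | column -t -s "$tab"
```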
However, I noticed a problem with this approach. It appears the column command has a limit on the number of rows it can manage, or maybe on the size of its input: if I run the command above on my machine I get 18665 files, whereas if I remove column from the pipe I get all 125890 files. This needs further investigation.
An alternative to the column command is the usage of printf in the awk statement. Since we have a fixed number of fields to print out (we decide them in the find command) we can establish a fixed format for each field, e.g.
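A sketch of that printf approach (field widths are arbitrary, and the find format is an assumption: date, time, then path):

```shell
# printf in awk pads each field to a fixed width, so the columns line
# up without the column command. $1 = date, $2 = time; the rest of the
# line is the path, reassembled so file names with spaces stay whole.
find ~ -type f -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort -r |
awk '{
    date = $1; time = $2
    sub(/^[^ ]+ +[^ ]+ +/, "")      # strip date and time, keep the path
    printf "%-10s  %-5s  %s\n", date, time, $0
}'
```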