Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
I am passing files and directories to a function so I can grep them for certain patterns. I would like to remove duplicate entries. I am using the following, which obviously fails if files inside a supplied directory match file names supplied directly in $@.
Code:
declare -A tag
for fda in "$@"; do
    [[ -f $fda ]] || [[ -d $fda ]] || continue    # skip invalid entries
    [[ ${tag[comint:${fda}]+E} ]] && continue     # skip entries already seen
    tag[comint:${fda}]=1
    fdir+=( "$fda" )
    [[ -f $fda ]] && fla+=( "$fda" )              # plain files
    [[ -d $fda ]] && dra+=( "$fda" )              # directories
done
For instance, running the function mysearch to search for .rc files in the current directory (.) using --incl .rc:
Code:
./mysearch -p "Gnu" --incl .rc *.rc .
This produces the following files, some of which are repeated:
Merge the two lists into a single list, sort the combined list, read through the sorted list looking for two or more consecutive instances of the same name and delete the duplicates.
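A hedged sketch of that merge/sort/dedupe idea in bash (the list names and sample values below are made up for illustration, not taken from the actual run):

```shell
#!/bin/bash
# Two overlapping name lists, e.g. one from "*.rc" and one from a directory scan.
list1=(alpha.rc beta.rc)
list2=(beta.rc gamma.rc)

# Merge both lists, sort, and let uniq drop the consecutive duplicates.
mapfile -t merged < <(printf '%s\n' "${list1[@]}" "${list2[@]}" | sort | uniq)
printf '%s\n' "${merged[@]}"
```

With GNU sort you can also collapse the last two stages into a single `sort -u`.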
There are utilities that can do this. But if you want a script, this will find all files that have duplicates, based on md5sum. You can then manually delete the duplicates.
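The script itself isn't shown above; one possible shape for it, assuming GNU coreutils (md5sum, uniq with -w/-D), is the following sketch. The demo files are hypothetical:

```shell
#!/bin/sh
# Demo setup: a throwaway directory with one duplicated file.
tmp=$(mktemp -d)
echo "hello" > "$tmp/a.txt"
echo "hello" > "$tmp/b.txt"
echo "world" > "$tmp/c.txt"

# Hash every file, sort by hash, and print only lines whose first 32
# characters (the md5 digest) repeat -- those are the duplicate candidates.
find "$tmp" -type f -exec md5sum {} + | sort | uniq -w32 -D

rm -rf "$tmp"
```

Nothing is deleted here; the output just groups the duplicates so you can remove them manually, as suggested above.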
Another option is to use fslint. It is a fantastic program that finds duplicates and allows you to delete/rename them if required. It has both GUI and command line modes, but I have only used the GUI mode.
Yes, it has been implemented several times in several different languages.
Look for dupfinder, duplicate finder, fdupes or similar.
Looks like fslint is continued here: https://github.com/qarmin/czkawka
It’s much faster than literally anything else, because it doesn’t read the contents of the files.
fdupes starts by just comparing file sizes, then md5 and finally compares the file contents. So, it is actually both fast and safe. It also doesn't crash on dangling symlinks.
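That size-first staging is easy to mimic in shell. This is only a sketch of the idea, assuming GNU find/awk and made-up demo file names, not how fdupes itself is implemented:

```shell
#!/bin/sh
# Demo files: two share a size and contents, one differs.
tmp=$(mktemp -d)
printf 'aaaa' > "$tmp/one"
printf 'aaaa' > "$tmp/two"
printf 'bb'   > "$tmp/three"

# Stage 1: keep only files whose size collides with another file's size.
# Stage 2: hash just those survivors and print the colliding digests.
find "$tmp" -type f -printf '%s %p\n' | sort -n \
  | awk '{ if (seen[$1]++) { if (seen[$1] == 2) print first[$1]; print } else first[$1] = $0 }' \
  | cut -d' ' -f2- | xargs md5sum | sort | uniq -w32 -D

rm -rf "$tmp"
```

Note the pipeline breaks on paths containing spaces, which is exactly why a purpose-built tool like fdupes is the safer choice.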
Interesting...
I didn't know that it was no longer maintained. Since it works without issues on my machine (Debian 11 Unstable), so far I didn't see any need to visit the upstream site.
Debian 11 is stable, not unstable. Debian unstable does not have a version number. How did you install fslint?