Look for duplicates in folder tree A and folder tree B, then delete the duplicates only from A.
I would like to find identical files that exist both in directory tree A and in directory tree B (a tree might consist of just one folder). Then I want to delete the duplicated files from tree A only, without deleting anything from tree B.
What is the best way to do this, please?
The FSlint program lists duplicates, but there is no method that I am aware of for getting it to highlight only the duplicate files in tree A. You would have to go through the list and manually select the files in tree A, which would be too big a job if a large number of files are involved.
Thanks.
Last edited by grumpyskeptic; 10-14-2018 at 04:09 AM.
I know not of this "FSlint" program of which you speak, but I refuse to allow dup finders to automatically delete files. It will delete things you don't want it to. Maybe not this time, but it will happen.
fdupes and fslint are probably the best-known CLI ones, but I use duff as it allows me to put decent regex into the mix. But I always generate a list I can script later into an "rm" loop.
Much safer, and trivial to knock up.
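The "generate a list first, delete later" workflow described above can be sketched with plain coreutils; a rough example, assuming "identical" means identical MD5 content (the tree names A/ and B/ and all file and list names below are made up for the demo):

```shell
# Sketch: checksum both trees, then list the tree-A paths whose
# content also exists somewhere in tree B. Nothing is deleted;
# the list is reviewed first, then fed to rm by hand.

# Tiny demo layout -- replace with the real trees.
mkdir -p A B
echo "same content" > A/dup.txt
echo "same content" > B/copy.txt
echo "only in A"    > A/unique.txt

# Checksum every regular file in each tree.
find A -type f -exec md5sum {} + | sort > a.sums
find B -type f -exec md5sum {} + | sort > b.sums

# Keep just the hash column from B, then print the tree-A paths
# whose content hash also appears in B.
cut -d' ' -f1 b.sums > b.hashes
grep -F -f b.hashes a.sums | sed 's/^[0-9a-f]*  //' > dup-in-A.lst

cat dup-in-A.lst                       # review this list by eye first
# xargs -d '\n' rm -- < dup-in-A.lst   # only run once the list looks right
```

The point of keeping the deletion step commented out is exactly the one made above: you get a list you can eyeball or post-process before anything is removed.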
Maybe something like this will work (not tested).
Code:
ls -A A | sort > A.lst; ls -A B | sort > B.lst; comm -12 A.lst B.lst > duplist
Plus, maybe:
Code:
sed 's|^|A/|' duplist
I think it could be dangerous to implement such an algorithm based on filenames only, unless that is the OP's requirement (the OP didn't define what counts as "duplicates/identical files": same filenames, same contents, or both?).
Maybe a safer approach would be to go with hashes? Otherwise a specialized tool, but I have no experience with any.
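Going one step further than hashes, a byte-for-byte check avoids collisions entirely; a rough sketch, assuming "duplicate" means same relative path and same content (the directory names treeA/ and treeB/ and the output file name are hypothetical):

```shell
# Sketch: treat "duplicate" as same relative path AND same bytes.
# cmp -s compares byte-by-byte, so a false positive is impossible.

# Tiny demo layout -- replace with the real trees.
mkdir -p treeA/sub treeB/sub
echo "identical"    > treeA/sub/f.txt
echo "identical"    > treeB/sub/f.txt
echo "differs here" > treeA/other.txt
echo "differs"      > treeB/other.txt

# Walk tree A; for each relative path that also exists in B,
# keep it only if the two files have identical content.
( cd treeA && find . -type f | sed 's|^\./||' ) | while read -r rel; do
    if [ -f "treeB/$rel" ] && cmp -s "treeA/$rel" "treeB/$rel"; then
        printf 'treeA/%s\n' "$rel"   # safe deletion candidate
    fi
done > dups-by-content.lst

cat dups-by-content.lst              # should list only treeA/sub/f.txt
# Review, then:  xargs -d '\n' rm -- < dups-by-content.lst
```

This only finds files at the same relative path in both trees; combining it with a hash pass (as above) would also catch identical files stored under different names.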