I am not familiar with an arrow command.
Rather than cycling through all of the files and comparing each one to the remaining files in the list, I would run "md5sum" (or "sum") on all of the files and then look for duplicate checksum values.
You can do this for files in various subdirectories as well.
Code:
find . -maxdepth 1 -type f -exec md5sum '{}' \; >md5sumlist
cut -d' ' -f1 md5sumlist | sort | uniq -d >acopylist
grep -f acopylist md5sumlist
I think you can shorten this using the "-w32 -D" options for uniq, in which case the grep command isn't necessary.
Code:
find . -maxdepth 1 -type f -exec md5sum '{}' \; | sort | uniq -w32 -D
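That one-liner also extends to subdirectories by dropping "-maxdepth 1", as mentioned above. A quick self-contained sketch (the temp directory and file names here are just for illustration; "-w32" works because an md5 hash is exactly 32 hex characters at the start of each md5sum output line):

```shell
# Create a throwaway directory with two identical files and one different one
tmpdir=$(mktemp -d)
printf 'same\n'  > "$tmpdir/a.txt"
printf 'same\n'  > "$tmpdir/b.txt"
printf 'other\n' > "$tmpdir/c.txt"

# Recurse through all subdirectories; print every line whose first
# 32 characters (the md5 hash) match another line's
find "$tmpdir" -type f -exec md5sum '{}' \; | sort | uniq -w32 -D

rm -rf "$tmpdir"
```

This prints the md5sum lines for a.txt and b.txt only, since they share a checksum.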
----
Quote:
arrow --query=`find -type f DIR` --compare=`find -type f DIR`
1) The find command doesn't look right. The directory to search should come before the expression: find DIR -type f.
2) --query=FILE1 --compare=FILE2 implies that the argument to query should be a single file rather than every file in the directory. Even if it accepted a number of files, such as
arrow --compare=FILE --query="FILE1 FILE2 ..."
there could still be a problem if the number of files in the directory is too large, since the backtick expansion can exceed the shell's argument-length limit.
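That argument-length problem is exactly what xargs is for: it reads the file list from stdin and invokes the command in batches that fit within the kernel's limit. A hedged sketch of the checksum approach written that way (-print0 and -0 also make it safe for filenames containing spaces):

```shell
# Instead of passing every filename on one command line,
# stream them to md5sum in safely sized batches
find . -maxdepth 1 -type f -print0 | xargs -0 md5sum | sort | uniq -w32 -D
```

This produces the same duplicate report as the -exec version, but works no matter how many files the directory holds.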