If the order does not matter, and all rows within each file are unique (no two identical rows in the same file), then the fastest approach is probably:
Code:
cat file1 file2 | sort | uniq -d | cat file1 - | sort | uniq -u
which displays only the lines of file1 that are not common to file1 and file2. If you want the reverse (the lines in common), it is simpler:
Code:
cat file1 file2 | sort | uniq -d
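As a quick illustration, here is what both commands print on two small sample files (contents invented for this example):
Code:
$ cat file1
apple
banana
cherry
$ cat file2
banana
date
$ cat file1 file2 | sort | uniq -d                                  # lines in common
banana
$ cat file1 file2 | sort | uniq -d | cat file1 - | sort | uniq -u   # lines only in file1
apple
cherry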
If the above conditions do not hold (duplicate rows within a file, or the order matters), then up to a certain size (a few thousand lines) this should still be a good solution:
Code:
grep -vxf <(sed 's/[]\.*^$[]/\\\0/g' file2) file1
which gives the lines of file1 that are not in file2: the sed step escapes the regex metacharacters in file2 so grep matches each line literally, and the -x option restricts matching to whole lines.
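For instance, with made-up contents like these (assuming bash, since <( ) is a process substitution, and GNU sed), the escaping matters because a line such as "a.b" in file2 would otherwise be taken as a regex and also knock out "axb" from file1:
Code:
$ cat file1
a.b
axb
plain
$ cat file2
a.b
$ grep -vxf <(sed 's/[]\.*^$[]/\\\0/g' file2) file1
axb
plain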
To output the result to a file (say "file3"), just append "> file3" to any of the above commands.
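For example, to save the lines in common:
Code:
cat file1 file2 | sort | uniq -d > file3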
Yves.
[edit:]I had forgotten the 'f' option in grep. Now it is ok.[/edit]