Code:
sed 's/document.*adminsown.*$//'
This seems a bit risky to me, as it relies on a few keywords plus greedy regex patterns. There is a non-zero risk of hitting false positives. In particular, the common word "document" could appear more than once in the line.
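For example, take a hypothetical line where "document" also appears in legitimate code earlier on; the greedy .* then wipes out everything from the first occurrence onwards:
Code:
$ echo 'var d = document.getElementById("x"); document.write("<iframe src=adminsown></iframe>");' | sed 's/document.*adminsown.*$//'
var d =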
Code:
sed -i 's/.\{141\}$//'
Uh-oh. This will remove the last 141 characters from every line of every file fed to it! At the very least we need to give it an address so that it only targets the last line of the file. Even better would be to put at least a few of the actual string's characters into the pattern.
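A toy example with a 5-character tail instead of 141 shows what the address changes:
Code:
$ printf 'line one XXXXX\nline two XXXXX\n' | sed 's/.\{5\}$//'
line one
line two
$ printf 'line one XXXXX\nline two XXXXX\n' | sed '$ s/.\{5\}$//'
line one XXXXX
line two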
Also, use the -r option to avoid having to backslash the braces. Give the -i option a suffix so that it creates backup files too. You can always run a quick find command to remove them later when you're sure everything went as planned.
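Something like this would do the trick, assuming the .temp.backup suffix used below:
Code:
find . -type f -name '*.temp.backup' -delete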
Code:
sed -i.temp.backup -r '$ s/document\.write.{117}iframe>'\''\);$//'
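Note the  '\''  sequence in the middle: that's the standard trick for embedding a literal single quote inside a single-quoted shell string. The closing parenthesis also has to be backslashed, since -r turns it into a grouping metacharacter. Before letting anything loose with -i, it's worth a dry run to confirm the pattern matches exactly what you expect (the filename here is just a hypothetical example):
Code:
grep -c -E "document\.write.{117}iframe>'\);\$" infected.html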
Finally, be aware that the above use of
xargs will fail on filenames that contain whitespace. See here for how to handle them safely:
How can I find and deal with file names containing newlines, spaces or both?
http://mywiki.wooledge.org/BashFAQ/020
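In short, the safe variants NUL-terminate the filenames on both ends, along these lines:
Code:
find . -type f -print0 | xargs -0 grep -l "adminsown"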
Edit: I'd personally go with a
while+read loop instead:
Code:
while IFS='' read -r -d '' fname; do
    # only touch files that actually contain the marker string
    if grep -q "adminsown" "$fname"; then
        sed -i.temp.backup -r '$ s/document\.write.{117}iframe>'\''\);$//' "$fname"
    fi
done < <( find . -type f -print0 )
Maybe a bit slower, but probably safer and easier to manage.
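The process substitution at the end is deliberate, by the way: piping find into the loop would run the loop body in a subshell, and anything set inside it would be lost when the loop ends. A quick sketch of the difference:
Code:
count=0
while IFS='' read -r -d '' fname; do
    count=$(( count + 1 ))
done < <( find . -type f -print0 )
echo "$count files seen"    # with "find | while" this would print 0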
Edit2:
Thinking a bit more, you should be able to replace the find command with a simple recursive grep, and simplify the loop:
Code:
while IFS='' read -r -d '' fname; do
    sed -i.temp.backup -r '$ s/document\.write.{117}iframe>'\''\);$//' "$fname"
    # keep a record of every file we touched
    echo "$fname" >> logfile
done < <( grep -R -D skip -I -s -l -Z "adminsown" . )
I also added a line that appends each filename to a logfile, for later tracking.
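When it's done, the logfile gives you a quick sanity check on how many files were actually touched:
Code:
wc -l logfile
sort -u logfile | less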