Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux-related and doesn't seem to fit in any other forum, then this is the place.
I want to write a script that is called by the redirect_program option in squid.conf and redirects HTTP requests if they would download a specially crafted JPEG.
As far as I know, such a redirect program should accept the URL on its standard input and print the rewritten URL (or a blank line) on its standard output. I thought I might do it with bash, as I have no Perl knowledge; besides, the task seemed simple.
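For reference, the relevant squid.conf lines look something like this (the script path is a made-up example):

```
# squid.conf excerpt -- path is hypothetical
redirect_program /usr/local/bin/redirect.sh
redirect_children 5     # number of redirector helper processes squid keeps running
```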
My first attempt to do it:
#!/bin/bash
# Split input on newlines only, so URLs are not split at spaces.
IFS="
"
# Note: `cat` in backticks reads stdin until EOF before the loop starts.
for url in `cat` ; do
    echo "$url" >> /var/log/redir/redir.log
    case "$url" in
        *jpg)
            echo "http://redirected.to.here/" ;;
        *)
            echo "$url" ;;
    esac
done
But my script only works if I feed it URLs this way:
cat urllist.txt | redirect.sh
If I try to use it from squid as intended, squid stalls (no pages are served any longer).
Besides, no URLs show up in /var/log/redir/redir.log, as if the script never started, though I can see several instances of the script running.
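In hindsight, the stall is most likely because `cat` inside backticks reads all of stdin until EOF, while squid keeps the pipe open and waits for one answer per request line, so neither side ever proceeds. A line-at-a-time loop avoids that. A minimal sketch (the redirect target is a placeholder, and the `--run` guard is my own arrangement so the function can be tested without consuming stdin):

```shell
#!/bin/bash
# Decide the rewritten URL for a single request.
redirect_one() {
    case "$1" in
        *jpg) echo "http://redirected.to.here/" ;;   # placeholder target
        *)    echo "$1" ;;
    esac
}

# Squid sends one request per line ("URL client_ip/fqdn user method");
# answer each line as it arrives instead of waiting for EOF.
# Guarded behind --run so sourcing this file does not block on stdin.
if [ "${1:-}" = "--run" ]; then
    while read -r url rest; do
        redirect_one "$url"
    done
fi
```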
OK, I have spent the last day googling the web for anything that could point me in the right direction, but I found nothing useful.
I only found that very few people use the redirect_program feature of squid, and the redirect programs they do use are all Perl scripts that do nothing except match patterns against the input URL and rewrite it based on the result of the pattern match.
What I want to establish is, however, much more: I want the redirect program to read the destination URL from squid on standard input; call wget to download and temporarily store the target of that URL; run clamd or my JPEG sanity checker on the download; check their exit codes; rewrite (or not) the URL based on those exit codes; and hand the resulting URL back to squid on standard output.
Since I found not a single redirection script on the internet that uses bash, I think I should do it in Perl.
All I have now as a start is this simple Perl redirector script that only rewrites one URL to another:
#!/usr/bin/perl
$| = 1;    # unbuffered STDOUT, so each answer reaches squid immediately
while (<>) {
    s@http://www.yahoo.com@http://10.10.10.10@;    # rewrite one URL to another
    print;
}
I have no Perl knowledge and I do not want to become a Perl programmer just to write a small redirector script (though I know I should learn the basic syntax to do that).
Could you give me a hint on how to call an external program from Perl, and how to check its exit code?
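For the record, the usual way is `system` plus the `$?` status variable: `system` returns -1 if the command could not be started at all, and otherwise the high byte of `$?` holds the exit code. A minimal sketch (the URL and temporary path are placeholders):

```perl
#!/usr/bin/perl
# Run an external program and check its exit code.
my $rc = system("wget", "-q", "-O", "/tmp/check.jpg", "http://example.com/a.jpg");
if ($rc == -1) {
    print "could not run wget: $!\n";    # system() failed to start the command
} else {
    my $exit = $? >> 8;                  # high byte of $? is the exit status
    print $exit == 0 ? "download ok\n" : "wget exited with $exit\n";
}
```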
Thanks to everyone who shows interest in this weblog of mine :-)
Success. And I did it with Perl.
So here is my first Perl script, serving as a redirector for squid, filtering downloaded executables and JPEGs through ClamAV and my JPEG sanity checker.
If clamd finds an infected executable, the client is served an error page instead; if my JPEG sanity checker finds a specially crafted JPEG among the downloads, the client is served another image (a red exclamation mark) instead.
Sure, the following script is nasty, as I knew nothing of Perl yesterday and know only a little today. Presently, the script totally lacks error handling, and there are lots of other todos, too.
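The decision logic described above could be sketched roughly like this (the function, paths, and replacement URLs are hypothetical stand-ins, not the actual script; clamdscan really does exit with 1 when it finds a virus):

```perl
# Hypothetical sketch of the per-URL check.
sub check_url {
    my ($url) = @_;
    my $tmp = "/tmp/redir.$$";
    system("wget", "-q", "-O", $tmp, $url);         # fetch the object
    return $url if ($? >> 8) != 0;                  # fetch failed: pass through
    if ($url =~ /\.(exe|com|scr)$/i) {
        system("clamdscan", "--quiet", $tmp);       # exit code 1 means infected
        return "http://proxy.local/virus-error.html" if ($? >> 8) == 1;
    } elsif ($url =~ /\.jpe?g$/i) {
        system("/usr/local/bin/jpegcheck", $tmp);   # hypothetical sanity checker
        return "http://proxy.local/bad-jpeg.png" if ($? >> 8) != 0;
    }
    return $url;                                    # clean: leave the URL unchanged
}
```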
It basically works, though I have subjected it to very limited testing so far.
Actually, it does not work very well yet :-(.
It only works when someone requests single files.
When complex pages are requested, the redirector script often does not give the rewritten URL back to squid, but a totally different, ancient URL instead.
This may be related to the "\n" printed by the script after the rewritten URL. If the "\n" is not there, squid seems to get stuck. If the "\n" is there, then squid receives a surplus "\n", which mixes things up.
In principle, this may be the good old buffered I/O problem, which does not seem to be solved yet.
I put the "\n" at the end of all "print" commands, and now things are much better: squid and the redirector script stay synchronized.
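The discipline that finally keeps squid and the redirector in sync boils down to: strip the incoming newline first, then print exactly one "\n"-terminated answer per request, with STDOUT unbuffered. A sketch (the pass-through answer is a placeholder for the real rewrite decision):

```perl
#!/usr/bin/perl
$| = 1;                      # unbuffered STDOUT: squid must never wait on a full buffer
while (my $line = <STDIN>) {
    chomp $line;             # strip the incoming "\n" first...
    my ($url) = split ' ', $line;
    my $answer = $url;       # ...decide on a rewrite here...
    print "$answer\n";       # ...then emit exactly one newline per answer
}
```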
Though it once happened that only a fraction of the requested page was loaded; it loaded fully after reloading the page.
Edit: I added a whitelist and a blacklist so as not to re-check URLs that have already been checked once. The redirector script has been working like a charm with three users for two weeks now. It is time to allow some more users to use it.
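The whitelist/blacklist idea can be sketched as two hashes consulted before doing any expensive work (names are hypothetical; full_check() stands in for the wget/clamd/JPEG pipeline):

```perl
my %whitelist;   # URLs that checked out clean
my %blacklist;   # bad URLs, mapped to their replacement URL

sub verdict {
    my ($url) = @_;
    return $url             if $whitelist{$url};         # known clean: skip the check
    return $blacklist{$url} if exists $blacklist{$url};  # known bad: reuse the rewrite
    my $rewritten = full_check($url);                    # full_check() is hypothetical
    if ($rewritten eq $url) { $whitelist{$url} = 1 }
    else                    { $blacklist{$url} = $rewritten }
    return $rewritten;
}
```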
By the way, the redirector script (or, to be more precise, clamd) caught a virus that would have been sucked in past NAV Corp. Ed.: the Trojan.Downloader one (or Trojan.Dropper, as BitDefender identified it).