I am running a Java application on my Linux machine that connects over the internet to a remote computer on a given port. If the internet link goes down while the application is running, its behaviour becomes unpredictable, so I want to reproduce the timeout problem by dropping the incoming and outgoing packets for the given IP using iptables. These are the rules I am using to drop the incoming/outgoing packets:
Now, if I apply these rules before starting my application, they properly drop the packets and the application times out. But say I haven't applied the rules when the application starts (i.e. the IP is not yet blocked), so the application connects to the remote IP on the specified port and opens input/output streams on it. If I then apply the rules above to block the incoming/outgoing traffic for that IP, they have no effect: data keeps flowing on both the input and output streams and no timeout occurs.
What is the problem here, and what would be the workaround?
Your rules are only going to drop the initial SYN packets that establish the connection. Once the connection has been initiated and the stream established, you won't see any further SYN packets, so I'd guess you have a default policy of ACCEPT or some other rule that lets the rest of the traffic through (something like a rule allowing established/related traffic, or one that allows non-SYN traffic). It's hard to say specifically without seeing your entire firewall ruleset, so post it if you want specifics.
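For example, a ruleset along these lines (purely a guess at what yours might look like, with 198.51.100.10 standing in for the remote host's address) would behave exactly as you describe, because the packets of an already-open connection are not SYN packets, so they sail straight past the drops:

# Guessed example only -- not necessarily your actual rules.
# SYN-only drops: these block *new* connections to/from the remote host...
iptables -A INPUT -s 198.51.100.10 -p tcp --syn -j DROP
iptables -A OUTPUT -d 198.51.100.10 -p tcp --syn -j DROP
# ...but traffic on a connection that was opened before the drops were added
# is still accepted by a rule like this (or by a default ACCEPT policy):
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT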
If you wanted to mimic the effects of the link going down, you could probably do something a little more drastic like:
iptables -I INPUT -j DROP
iptables -I OUTPUT -j DROP
iptables -I FORWARD -j DROP
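That cuts off all traffic on the box, though. If you want the rest of the machine to stay reachable, you could limit the drops to the one peer (again using 198.51.100.10 as a stand-in for the remote host's address). Because these rules match all packets, not just SYNs, and are inserted at the top of each chain with -I, they also kill the already-established stream:

# Drop everything to/from the remote host, including established traffic:
iptables -I INPUT -s 198.51.100.10 -j DROP
iptables -I OUTPUT -d 198.51.100.10 -j DROP

# Remove the rules again once the test is done:
iptables -D INPUT -s 198.51.100.10 -j DROP
iptables -D OUTPUT -d 198.51.100.10 -j DROP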