Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I hear criminals are infecting thousands of computers in order to launch distributed denial-of-service (DDoS) attacks, slowing even sites with plenty of bandwidth to the point of unusability.
My question is whether the following would work against a small-scale DDoS attack:
Say a very large number of friends offered their computers as web servers, all mirroring the same HTML content, and say a central server were available that did nothing but hand out links to currently working web servers among them. It would work much like P2P file sharing, but with HTML pages being shared, and transparently to the visitor (i.e. the visitor sees normal text and images in their browser). If N web servers took part at a given time, where N is large (say 1,000,000), how large would N have to be to prevent a significant slowdown when attacked by a DDoS launched from M other computers, each with the same average bandwidth as any of the web servers?
PS. Please don't say that attackers' IPs can be blocked; I just want to know how much more collective bandwidth the web servers must have relative to the attackers for a DDoS attack to have no significant effect. Note that any of the web servers can also play the role of the initial server; that server is only an entry point you visit once, and it serves no content itself.
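As a back-of-envelope sketch of the question being asked (my own illustration, under the optimistic assumption that attack traffic spreads evenly over the mirrors and every machine has equal bandwidth):

```python
# Toy model: M bots attack N mirrors, all machines with equal bandwidth B.
# Assumes (optimistically) the attack traffic spreads evenly over mirrors.
def attack_fraction(n_servers: int, m_bots: int) -> float:
    """Fraction of each mirror's bandwidth consumed by the attack."""
    return m_bots / n_servers

# 1,000,000 mirrors vs 1,000 bots: each mirror loses 0.1% of its bandwidth.
print(attack_fraction(1_000_000, 1_000))  # 0.001
```

The weak point is the even-spread assumption: in practice the attackers concentrate on whichever IPs the entry point hands out, so the per-mirror load can be far higher than M/N.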
It would take them only a very short time to find your central server. Then they attack it, bring it down, and then there is no list of links. Essentially your central server is just acting as a DNS server.
Quote:
It would take them only a very short time to find your central server. Then they attack it, bring it down, and then there is no list of links.
Why don't they do the same to the entry server of a P2P network? Surely Hollywood and many others are losing billions to P2P file sharing; someone must have been pissed off enough to pay underground organisations to attack these P2P entry servers to extinction.
Even an existing P2P server could be used as the entry point, perhaps transparently to the user if Java accesses the P2P server.
By the way, the entry server is only required for discovering peers; after that it's 1,000,000 servers against 1,000 DDoS bots. The question remains: how many servers can each DDoS bot slow down significantly?
Thanks, I just wanted to know how much bandwidth a DDoS attack can consume compared to the bandwidth it is launched from, and therefore whether distributed web pages like Freenet's have any chance against DDoS if enough people take part in the effort. I'm not interested in the anonymity of it, just the robustness.
Quote:
Why don't they do the same to the entry server of a P2P network? Surely Hollywood and many others are losing billions to P2P file sharing; someone must have been pissed off enough to pay underground organisations to attack these P2P entry servers to extinction.
Even an existing P2P server could be used as the entry point, perhaps transparently to the user if Java accesses the P2P server.
By the way, the entry server is only required for discovering peers; after that it's 1,000,000 servers against 1,000 DDoS bots. The question remains: how many servers can each DDoS bot slow down significantly?
My understanding is that if the tracker goes offline, the P2P peers already connected to each other can still send data to each other. The tracker is only required for the discovery of new peers.
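A toy illustration of that point (my own sketch; the addresses are hypothetical documentation values):

```python
# Toy illustration: the tracker is only a bootstrap; once peers know each
# other's addresses, losing the tracker does not break the swarm.
tracker = {"peers": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]}  # hypothetical IPs

known = set(tracker["peers"])  # a new peer bootstraps from the tracker once

tracker = None                 # the tracker goes offline

known.add("10.0.0.4")          # address learned directly from an existing peer
print(sorted(known))           # the swarm keeps growing without the tracker
```

Real clients do exactly this via peer exchange (PEX) and DHTs, which is why taking down a tracker only stops new peers from discovering the swarm.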
Do you reckon an existing DNS server could play the role of the tracker, helped by a secret server that keeps editing the DNS record that maps the domain to an IP?
Fortunately, most HTTP-based DoS attacks we have seen have a particular weakness - they are vulnerable to a technique known as "tarpitting".
Tarpitting works by taking advantage of TCP/IP's idea of window size and state. In a tarpitted session, we respond to the connection initiation as normal, but we immediately set our TCP window size to just a few bytes in size. By design, the connecting system's TCP/IP stack must not send any more data than will fit in our TCP window before waiting for us to ACK the packets sent. This is to allow connections to deal with packet loss that might occur in a normal session. If the sending system doesn't get an ACK to a packet sent, it will resend the packet at increasingly longer intervals. In our tarpitted session, we simply don't ACK any of the post-handshake packets at all, forcing the remote TCP/IP stack to keep trying to send us those same few bytes, but waiting longer each time. With this, bandwidth usage falls off quickly to almost nothing.
There seems to be an added side-effect to mitigation by tarpit. When the attacked host mitigates with an iptables DROP (no response), the attacker's CPU load is fairly minimal and the system is responsive. However, as demonstrated by the graphic below, under tarpitting the CPU load on our test system quickly rose to 100% as the attacking system's kernel tried to maintain a large number of open connections in a retry state.
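True tarpitting (never ACKing at all) needs kernel help, e.g. the TARPIT target from xtables-addons, but the window-shrinking half of the idea can be sketched from userspace. Here is a minimal Python sketch (my own illustration, not LaBrea's code):

```python
import socket

# Sketch of the window-shrinking half of a tarpit: request a tiny receive
# buffer before listen(), so the advertised TCP window is small and a
# connecting sender can keep only a few bytes in flight between our ACKs.
# (A real tarpit like LaBrea goes further and stops ACKing entirely, which
# requires raw packets or a kernel module such as xtables-addons' TARPIT.)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64)  # ask for ~64 bytes
srv.bind(("127.0.0.1", 0))
srv.listen(8)

# The kernel rounds the requested value up (Linux doubles it and enforces
# a floor), so the effective buffer is larger than 64, but still far below
# the default.
print(srv.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
srv.close()
```

If the server then never read from accepted connections, each attacking sender would stall after filling that tiny window, tying up its resources for very little of ours.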
It takes too long for the new IP to propagate through all the DNS servers.
Why is that a problem? When waves propagate across a pond, you observe an oscillating water level no matter where you are in the pond.
Quote:
Then there is the issue with DNS caches, which can take a day or more to clear.
There are services like DynDNS that automatically update your DNS record every time your dynamic IP changes, recommended for when you want to run a web site on your own computer. I haven't tried them, but are you saying that every time your IP changes your site will be inaccessible for a day or more to some people? Then that DynDNS service is almost a scam.
Quote:
In our tarpitted session, we simply don't ACK any of the post-handshake packets at all, forcing the remote TCP/IP stack to keep trying to send us those same few bytes, but waiting longer each time. With this, bandwidth usage falls off quickly to almost nothing.
This seems very promising. Why doesn't every web server do it? Or maybe they only do it if attacked.
OK, when you change your IP it's not just one server that has to know about it; every DNS resolver that people trying to contact you are using has to pick up the change. There are a LOT of caching resolvers out there. A resolver isn't pushed your change: it keeps serving the old record from its cache until the record's TTL (time to live) expires, and only then does it ask your authoritative server again. If your record carries a TTL of a day, it may be a day (or more) before a particular resolver picks up your change. Services like DynDNS work around this by publishing your record with a very short TTL (often around 60 seconds), which is the only reason these changes (for us little people) can take effect this fast. One used to have to jump through a ton of hoops (roughly a month) to get things changed (pre-WWW).
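To make the caching behaviour concrete, here is a toy resolver model (my own sketch; the name, IPs, and TTL are made up, with IPs from the documentation range). The worst-case delay before a resolver sees your new IP is simply the record's TTL:

```python
# Toy caching resolver: a record is served from cache until its TTL
# expires; only then is the authoritative server asked again. So the
# worst-case propagation delay for an IP change equals the TTL.
class Resolver:
    def __init__(self):
        self.cache = {}  # name -> (ip, expiry time)

    def lookup(self, name, authoritative, now):
        cached = self.cache.get(name)
        if cached and now < cached[1]:
            return cached[0]                 # cache hit: possibly stale
        ip, ttl = authoritative[name]        # cache miss: re-query
        self.cache[name] = (ip, now + ttl)
        return ip

auth = {"example.net": ("192.0.2.1", 60)}    # DynDNS-style 60-second TTL
r = Resolver()
print(r.lookup("example.net", auth, now=0))   # 192.0.2.1, cached until t=60
auth["example.net"] = ("192.0.2.2", 60)       # our IP changes at t=10
print(r.lookup("example.net", auth, now=30))  # still 192.0.2.1 (stale cache)
print(r.lookup("example.net", auth, now=61))  # 192.0.2.2 (TTL has expired)
```

With a one-day TTL the stale window in the middle lookup would last a day; with a 60-second TTL it lasts at most a minute, which is why DynDNS is usable for dynamic IPs.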
Quote:
This seems very promising. Why doesn't every web server do it? Or maybe they only do it if attacked.
From what I've gathered, there's a legal question about tarpits:
Quote:
In the days following the July 2001 Code Red worm outbreak, which infected 359,000 systems in 14 hours, software developer Tom Liston started work on an application that would turn the tables on worms. He created LaBrea, which essentially acts like a digital tar pit, trapping hackers and worms, forcing hackers to break off attacks, and preventing worms from moving on to other computers.
The free, open-source application has been heralded in security circles and nominated for awards as a unique weapon. It's also been pulled from Liston's Hackbusters.net site by its author. He yanked it April 15 when the Illinois resident learned that a 4-month-old state law (Compiled Statutes 720 ILCS 5) makes it illegal to create a device capable of disrupting a communication service without the express authorization of the communication service provider.
The law also makes it a crime to conceal the existence, origin, or destination of any communication from a service provider or any lawful party.
Technically, LaBrea disrupts communications and conceals the true origin of network communications. So Liston pulled LaBrea rather than risk prosecution for what he believes is, at best, a vaguely worded piece of legislation. www.informationweek.com
I am not a lawyer, so I can't advise you on the legal aspects of using a tarpit.
Quote:
OK, when you change your IP it's not just one server that has to know about it; every DNS resolver that people trying to contact you are using has to pick up the change.
But here it is not the ordinary web: here all IPs serve the same content, so you don't mind if the IP you get is out of date. The bad guy is getting an out-of-date IP too, and can attack it, then repeat with the next IP after a while, and the next, and so on. But how many IPs can he attack from the M computers he owns? We are back to the original question.
Quote:
it may be a day (or more) before a particular DNS gets updated to your change. Service like DynDNS are the only reason that these changes (for us little people) can get changed this fast.
A day seems like a fast change? There must be more to DynDNS than just getting your current external IP and writing it to a single DNS server. Otherwise, a day's delay would render it a scam.