LinuxQuestions.org (/questions/)
-   Linux - Kernel (https://www.linuxquestions.org/questions/linux-kernel-70/)
-   -   Process group - Feature idea and input wanted (https://www.linuxquestions.org/questions/linux-kernel-70/process-group-feature-idea-and-input-wanted-618620/)

blindmatrix 02-04-2008 12:50 PM

Process group - Feature idea and input wanted
 
Hello, I have an idea for a new feature that could possibly give a higher level of system-level DDoS protection, and I want your input. If it's positive I'd also wanna know how to make my idea heard; I'm not ready to add this feature myself :P

The concept is to form a process group: when Apache is started, all of its forks should be marked as a single big entity so that resources are throttled as such. An I/O fairness queue should not divide resources among all system processes individually, including every one of Apache's children; instead it should count ALL the children as a single entity, so that when the load is heavy, ALL the children together get about as much I/O time as SSH or other services...?

Get my idea? The concept would then be to run a system call before starting the fork()s and exec()s that would group them together, something like:

processgroup_enable();                 /* hypothetical syscall: mark this process as one throttling entity */
for (int i = 0; i < nchildren; i++)    /* nchildren: however many workers the service starts */
        fork();                        /* each child inherits the group setting */

so the process group setting would be shared by every child...
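
As a very rough sketch of the closest thing current primitives give you (this is NOT the proposed call, just an approximation built on setpgid() and setpriority(), and it only covers CPU niceness, not the per-group accounting I'm describing):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(void)
{
    /* Become leader of a new process group; every child forked below inherits it. */
    if (setpgid(0, 0) == -1)
        perror("setpgid");

    for (int i = 0; i < 4; i++) {        /* 4 workers, purely for illustration */
        pid_t pid = fork();
        if (pid == 0) {
            sleep(10);                   /* child: stand-in for a busy service worker */
            _exit(0);
        }
    }

    /* One call renices the whole group (CPU niceness only, not I/O). */
    if (setpriority(PRIO_PGRP, getpgrp(), 10) == -1)
        perror("setpriority");

    while (wait(NULL) > 0)
        ;                                /* reap the workers */
    return 0;
}

The kernel still schedules each child individually here, which is exactly the limitation the idea is meant to remove.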

Now for the input: is this a good idea? Has anyone had the same idea before? Pros and cons? Even if this isn't considered a good idea, I would still like to know what people's reactions are ;)

/Sven

unSpawn 02-04-2008 07:12 PM

Aren't (distributed) DoSes remotely controlled network resource exhaustion attacks? I mean, shouldn't that imply placing mitigating stuff in front of the "victim", network-wise?

blindmatrix 02-04-2008 09:57 PM

The concept that I'm thinking about is more to limit the damage while an attack is running, by limiting the time that one service can steal from other services in things like the file I/O queue, the send()/recv() queues and such...

unSpawn 02-05-2008 07:26 AM

Quote:

Originally Posted by blindmatrix (Post 3046313)
The concept that I'm thinking about is more to limit the damage while an attack is running, by limiting the time that one service can steal from other services in things like the file I/O queue, the send()/recv() queues and such...

Then you should start by defining what "damage" is and what constitutes an "attack". I mean, those are human interpretations of a situation, right? I mean, on a box with 64G RAM I may *want* to accept 20K sockets and it wouldn't constitute an attack. If that's not what you mean then maybe you mean something like "create a separate scheduler class for all processes belonging to one SID"?
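
To be clear, the grouping half already exists: a service can put itself into its own session at startup, and every worker it forks then shares that SID; it's only the scheduler class keyed on the SID that would be new. A minimal sketch of the existing side (nothing kernel-level in it, and the per-SID fairness accounting mentioned in the comment is the hypothetical part):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Classic daemon idiom: fork once so the child is not a process group
       leader, then setsid() gives it a fresh session of its own. */
    pid_t pid = fork();
    if (pid != 0)                 /* parent (or fork error): just leave */
        exit(0);

    pid_t sid = setsid();
    if (sid == (pid_t)-1) {
        perror("setsid");
        return 1;
    }

    /* Every worker this service now fork()s will report the same value
       from getsid(0); a per-SID scheduler class could key its fairness
       accounting on it. */
    printf("all workers of this service would share SID %ld\n", (long)sid);
    return 0;
}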

blindmatrix 02-05-2008 04:05 PM

Quote:

Originally Posted by unSpawn (Post 3046711)
Then you should start by defining what "damage" is and what constitutes an "attack". I mean, those are human interpretations of a situation, right? I mean, on a box with 64G RAM I may *want* to accept 20K sockets and it wouldn't constitute an attack. If that's not what you mean then maybe you mean something like "create a separate scheduler class for all processes belonging to one SID"?

True, "damage" in my case is that the machine freezes, not that the webserver dies... In systems when one computer serves more tasks then just a webserver, like mail and databases and such, all in one box. To prevent the webserver from eating up to much of the scheduled time, but as I stated I want ideas, more of a discussion then "give me this"...

But I guess that an alternative scheduler is what I'm thinking about, both for disk I/O and CPU time... and possibly network fairness queues too...
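
On the disk I/O side there is at least already a per-process-group knob: the ioprio_set() syscall can target a whole process group. A rough sketch, assuming the constant values below, which are copied from the kernel's ioprio definitions since glibc ships no wrapper or header for them:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Values from the kernel's ioprio definitions; glibc has no ioprio_set()
   wrapper, so the call goes through syscall(2). */
#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_CLASS_BE     2   /* "best effort" class, levels 0..7 */
#define IOPRIO_WHO_PGRP     2   /* apply to a whole process group */

int main(void)
{
    /* Drop the I/O priority of this entire process group to best-effort
       level 7 (the lowest), e.g. after the service has set up its group. */
    int ioprio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | 7;

    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PGRP, getpgrp(), ioprio) == -1) {
        perror("ioprio_set");
        return 1;
    }
    return 0;
}

That still only reprioritises the group, though; it doesn't make the scheduler account for the group as a single entity.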

The idea felt so good a few days ago, but I'm starting to think it wasn't that good after all :P...

The result should be something similar to putting a few machines under Xen and telling it to partition its resources evenly across all DomUs, but on a single-system basis, so that services are guaranteed to get some time slices...

Considering the existing fairness support, things look good as long as every subsystem runs only a single process and thread: each service then gets a roughly equal slice of CPU and I/O. The problem starts when a single service spawns more than one requester of time slices, like under some sort of DoS attack that leaves the server bogged down waiting on disk I/O because the disk channels are slow. If no other process needs attention, then of course one process should be able to take 100% for itself.

But say Apache is extremely heavily loaded, and so is your MTA; then it's likely that other services won't get much time allocated. There will be ~100 units of time allocated to the ~100 instances of Apache and ~50 units allocated to the ~50 processes of your MTA, while our beloved SSH server, which only has 2 processes running at this time, gets just 2 units, leaving about 1.3% of the system available to SSH.

Under these conditions it would be much nicer if the web server got 1 slot of time, the MTA got 1 slot of time and the SSH server also got 1 slot of time; that way the SSH server's 2 processes would get a lot more time per process than the webserver's...
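
To make those numbers concrete, a throwaway calculation of the two fairness models with exactly the counts from the example above (100 Apache processes, 50 MTA processes, 2 sshd processes):

#include <stdio.h>

int main(void)
{
    int apache = 100, mta = 50, sshd = 2;
    int total  = apache + mta + sshd;

    /* Per-process fairness: every process gets an equal unit of time. */
    printf("per-process: sshd gets %.1f%% of the machine\n",
           100.0 * sshd / total);       /* roughly 1.3% */

    /* Per-service fairness: each of the 3 services gets one equal slot,
       shared among however many processes it happens to run. */
    printf("per-service: sshd gets %.1f%%, i.e. %.1f%% per sshd process\n",
           100.0 / 3, 100.0 / 3 / sshd);
    return 0;
}

That works out to roughly 1.3% for sshd under per-process fairness versus about 16.7% per sshd process under per-service fairness.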

