No, what we truly need
(and for a variety of things related to this system) is a true workload-management / background-job system that can be installed and run as an Ubuntu (LTS) service, from an Ubuntu package.
By now we know more-or-less exactly what requests are "the bugaboo," and we know that our system is being flooded ... legitimately and in the normal course of web-site operations(!) ... with more concurrent requests "of a particular kind" than it can actually serve.
But we've also got things ... related to the same system ... that truly
are "batch jobs," and some of these have resource-availability queueing requirements of the sort a "batch job" system is built to handle. At the same time we've got very fast, "FastCGI-like" requests that might arrive at a rate of many hundreds or thousands per second. We need a solution that can do it all.
(Note: this is an "ancient, huge, troublesome and cantankerous ... but very profitable(!) ... 'straight-CGI' application" that runs quite well under mpm_event, and always will.)
I also need to know if and how it is possible for HTTP to "push" a message to a particular connected user. In other words, is it possible for the HTTP worker, having dispatched the request, to go off and do other things ... and for the web browser also to go off and do other things ... until the browser is notified, by a message pushed to it, that the requested data is now available?
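For what it's worth, the bare-bones form of that "push" idea is Server-Sent Events, and a sketch using nothing but the Python standard library looks like this; the port, the path, and the once-per-second loop are placeholders, and in practice a little daemon like this would sit beside Apache (proxied to it), not inside a CGI worker:

[code]
#!/usr/bin/env python3
# Bare-bones Server-Sent Events sketch, standard library only.  A real
# version would push only when a job-completion message arrives, rather
# than on this illustrative once-per-second loop.
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One long-lived response; each "data:" block is a pushed event.
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        for i in range(10):
            self.wfile.write(f"data: job {i} finished\n\n".encode())
            self.wfile.flush()
            time.sleep(1)

if __name__ == "__main__":
    # Browser side is a single line, with no polling loop:
    #   new EventSource("/events").onmessage = e => console.log(e.data);
    ThreadingHTTPServer(("localhost", 8080), SSEHandler).serve_forever()
[/code]

The browser opens one EventSource connection and simply waits; the server writes an event whenever it has news.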
Or, since many of the pieces of data being generated in this way are
images, can we somehow seamlessly insert an offline request-processing stage into this process, without client-side JavaScript polling? And without tying up an HTTPD worker process for the duration?
(Assume mpm_event.)
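The request side for the image case might then look like the sketch below, again assuming the hypothetical Redis queue from the worker sketch above: the CGI script only enqueues a render job and exits, so no HTTPD worker is held while the image is generated, and the pushed event later tells the page where to fetch the finished file. Every name here (the queue, the job fields, the output path) is made up for illustration:

[code]
#!/usr/bin/env python3
# Sketch of the request side (plain-CGI style), assuming the same Redis
# queue as the worker sketch above.  The queue name, the job fields, and
# the eventual output path are all hypothetical.
import json
import os
import sys
import uuid

import redis  # assumption: Redis as the broker, as above

job_id = str(uuid.uuid4())
redis.Redis(host="localhost", port=6379).rpush(
    "jobs",
    json.dumps({
        "type": "render",
        "id": job_id,
        "query": os.environ.get("QUERY_STRING", ""),
    }),
)

# Return a "ticket" immediately; the HTTPD worker is free as soon as this
# script exits.  When the worker publishes "done" for this id, the pushed
# event tells the page to load, for example, /rendered/<job_id>.png.
sys.stdout.write("Status: 202 Accepted\r\n")
sys.stdout.write("Content-Type: application/json\r\n\r\n")
sys.stdout.write(json.dumps({"job": job_id}))
[/code]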
References to existing web-pages (and
LQ posts!) that talk about similar scenarios and that describe (non-commercial) solutions are ... requested!