"No, probably not."
Think of it this way, instead ... your (user-mode) application needs to process "a TO-DO list of images" as fast as it can. Each of the processes responsible for "processing images" (there should be only one if the processing is not I/O-bound, or more than one if it is ...) grabs a work-request off this queue, processes it, and sends it on its way down another queue.
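A minimal sketch of that worker-pool arrangement, using Python's thread-safe `queue.Queue` — the "processing" step is a stand-in (here just upper-casing a filename), and the worker count and sentinel-shutdown convention are my assumptions, not anything mandated by the design:

```python
import queue
import threading

todo = queue.Queue()   # the TO-DO list of incoming work-requests
done = queue.Queue()   # the queue of completed requests

def image_worker():
    # Grab a work-request off the TO-DO queue, process it,
    # and send it on its way down the next queue.
    while True:
        request = todo.get()        # blocks until work arrives
        if request is None:         # sentinel: time to shut down
            todo.task_done()
            break
        result = request.upper()    # stand-in for real image processing
        done.put(result)
        todo.task_done()

workers = [threading.Thread(target=image_worker) for _ in range(2)]
for w in workers:
    w.start()

for name in ["cat.png", "dog.png", "bird.png"]:
    todo.put(name)

todo.join()                         # wait for the TO-DO list to drain
for _ in workers:
    todo.put(None)                  # one sentinel per worker
for w in workers:
    w.join()
```

Because every hand-off goes through a queue, adding a second worker (or removing one) changes nothing else in the program.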
There's a high-priority task that gets "immediately" notified by an incoming ethernet packet. (You don't need to dumpster-dive down to the hardware-interrupt level: just listen on a port.) This task collects the incoming request and pushes it onto the TO-DO list.
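The "just listen on a port" part might look like this — a sketch, not a production server: it accepts a single loopback connection, reads the request, pushes it onto the TO-DO list, and is done (the port number, payload, and one-shot accept loop are all illustrative assumptions):

```python
import queue
import socket
import threading

todo = queue.Queue()   # the TO-DO list fed by the network-facing task

def intake(server_sock):
    # The high-priority intake task: accept a connection, read the
    # request, push it onto the TO-DO list. No image processing here.
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        todo.put(data.decode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=intake, args=(server,))
t.start()

# Simulate the remote sender of a work-request:
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"new-image-request")

t.join()
server.close()
```

Note how little the intake task does per request — that's what lets it afford a higher dispatching priority without starving the workers.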
There's another task that services the queue of completed requests. It runs at the same priority as the image-processing tasks, so that the outbound queue doesn't get stoppered-up, but it has relatively little to do.
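That output-side task can be equally small. A sketch, assuming "delivery" just means handing each finished result onward (here, appending to a list):

```python
import queue
import threading

done = queue.Queue()   # the queue of completed requests
delivered = []         # stand-in for wherever results actually go

def drain():
    # Services the completed-work queue. Relatively little to do
    # per item, so it never becomes the bottleneck.
    while True:
        item = done.get()
        if item is None:       # sentinel: shut down
            break
        delivered.append(item) # stand-in for sending the result onward

t = threading.Thread(target=drain)
t.start()

for result in ["IMG-1", "IMG-2"]:
    done.put(result)
done.put(None)
t.join()
```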
The architecture thus described is ... "flexible." It's connected, so to speak, by rubber hoses and storage tanks, so that any momentary variance in the actual workload can be "flexibly" absorbed, while the workflow system as a whole always processes the load as rapidly as it can.
After all ... the "timeline that must not be missed" simply tells the system that "a new unit-of-work has just arrived." The task responsible for responding to this is given a (slightly ...) increased dispatching priority so that it can preempt the image-processing tasks, gather the new request, put it on the queue, and then go back to sleep.