Shell script overhead for realtime processing
Hello all,
While working on a small digital video application (character-based I/O at about 4 MB/sec), a question arose about shell overhead. Currently, data is sourced from /dev/video0 and passed to stdin of the first process; output from that process is then piped via stdout to the next process. All of the pipe connections and process management are done in a shell script. The other obvious option is for the first process to open /dev/video0 directly and process the data. Assuming the first process is written in the same language, with the same level of competence, roughly what burden does the shell script add in the first arrangement? I realize this may be difficult to answer without knowing the details of the first process, but any indicators, pointers to further reading, etc. would be appreciated. Thanks, Mark |
The first process reads the data the same as it would if it had opened /dev/video0 itself.
First-to-second is the standard write->read FIFO overhead, and 4 MB per second is not much. There is no overhead here from the shell itself: it just sets up the file descriptors and starts the processes. For example, to start a process with stdout redirected to /dev/null, the shell would fork, close stdout (fd 1), open /dev/null (which lands on the first free fd, 1), and exec the process. In short, the shell acts as a shell around the kernel, hence the name. |
Quote:
I agree with genss that there is basically no overhead; after the shell sets up the pipes it just waits (i.e. the shell process is blocked, not running). |
Your system is not 'real time'. Full stop. (https://en.wikipedia.org/wiki/Real-time_computing)
Now speaking of speed: shell scripts are slow because they keep forking to call external programs. |