File descriptor performance question
While real-time file monitoring is in progress, many file descriptors are created. Would there be any impact on performance if a large number of file descriptors is generated?
(In particular, when approaching the maximum count: a massive number of file descriptors is opened from fanotify events.) |
You would need to check the implementation if you want to be sure, but I think the answer is no.
|
Chewing up kernel resources unnecessarily is never a good thing - why do you have so many fd's?
|
Quote:
The file is analyzed based on the open fd (with yara). |
That is a symptom, not the reason - perhaps you should look at your design.
|
Yes, actually I was thinking of the following questions: what kind of performance do you mean, and how many is "many file descriptors", exactly?
|
This sounds like the old joke in which the patient says "Doctor, it hurts when I do this." The doctor replies "Don't do that!"
Seriously, try not to write or run piggy software. Ed |
I am making an anti-virus real-time monitoring system (using fanotify).
An fd is opened whenever a fanotify event (such as open, access, or close) occurs. If the file scan takes a long time, the fd is kept in a queue in the open state. In fact, I'm a Windows platform developer and didn't understand Linux well enough, so I asked. Thanks for all the comments. Currently, when an event occurs, the file path is collected, the fd is immediately closed, the path is stored in the queue, and the file is reopened later for scanning. |
I guess this is how it is planned to work, and it is OK. But if you find any performance-related problem, we can discuss it (for example, here)
|
Maybe it would be better to do the fanotify_init once, mark the root, and keep only the one fd permanently open? You still get the notifications without the continual setup/teardown cost.
|