Manager memory requirements for the intel framework

It remained at 27G after many cycles of replacing the input file with 18K new unique items.

    That is interesting, because by default the intel framework doesn't expire items, so every time you replaced the file you were loading an additional 18k items.

I took a closer look and see that the manager's VIRT value does increase by about 22M with each update, which is roughly the sum of the starting Intel::data_store and Intel::min_data_store sizes (21M) as reported by the global_sizes() table. It was masked by the 27G value before I commented out the "event Intel::new_item()" call. That makes sense: we must be adding that sum each time we run the update.
    If I get a chance I will resurrect the benchmarking code I was working on a while ago. It would do things like create a table of hosts, add 10k, 20k, 30k, and 40k hosts to it, and record the memory usage at each count to see what the real-world memory cost of different-sized data structures is. I never tried it with the intel framework, though.
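A Python analog of that benchmark could look like the sketch below. This is only an approximation of the idea: the "table of hosts" here is a plain dict standing in for a Bro table, and tracemalloc measures Python allocations, not Bro's actual memory footprint.

```python
import tracemalloc

def measure_table_memory(counts):
    """Build a dict of synthetic host addresses at each count and record
    the peak Python allocation size in bytes (a rough stand-in for
    measuring a Bro table of hosts at different sizes)."""
    results = {}
    for n in counts:
        tracemalloc.start()
        hosts = {f"10.{(i >> 16) & 255}.{(i >> 8) & 255}.{i & 255}": True
                 for i in range(n)}
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        del hosts
        results[n] = peak
    return results

for n, nbytes in measure_table_memory([10_000, 20_000, 30_000, 40_000]).items():
    print(f"{n:>6} hosts: {nbytes / 1e6:.1f} MB")
```

Comparing the per-count numbers shows the marginal cost of each additional 10k entries, which is the shape of the question being asked about the intel framework.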

That would be handy! Can we help with that? I need to generate intel input files in tiered sizes for testing. If your code is written in C/C++ or Python, it might make a great starting point.
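Generating tiered intel input files could be sketched in Python as follows. The tab-separated field layout (indicator, indicator_type, meta.source) is my assumption about the intel framework's input format, so verify it against a working intel file in your deployment before relying on it.

```python
import os
import tempfile

def write_intel_file(path, count, source="benchmark"):
    """Write `count` unique Intel::ADDR indicators to a tab-separated
    file. The #fields header and column names are assumed, not taken
    from the framework's documentation."""
    with open(path, "w") as f:
        f.write("#fields\tindicator\tindicator_type\tmeta.source\n")
        for i in range(count):
            # Unique 10.x.y.z addresses; supports up to ~16M distinct items.
            ip = f"10.{(i >> 16) & 255}.{(i >> 8) & 255}.{i & 255}"
            f.write(f"{ip}\tIntel::ADDR\t{source}\n")

# Emit one file per tier: 10k, 20k, 30k, 40k indicators.
outdir = tempfile.mkdtemp()
for n in (10_000, 20_000, 30_000, 40_000):
    write_intel_file(os.path.join(outdir, f"intel_{n}.dat"), n)
```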
    > We commented out the conditional that invokes "event Intel::new_item(item)" in base/frameworks/intel/main.bro to disable remote synchronization with the workers, and the huge VSize disappeared.
    This makes more sense. I don't think your memory usage has anything to do with the intel data itself; I think the communication code is falling behind.

Okay. I'm curious why the heap isn't released afterwards, though. Or maybe the communication never completes? I don't know how event processing works, but I wonder whether the remote Intel::new_item event gets "stuck" in the worker if it's processed synchronously in the same thread that's waiting for network input. I never tried this test with traffic running to the test system, but I will do so. I did run a heap check on a manager running in debug mode (-m) and confirmed it wasn't leaking memory.
    How many worker processes do you have configured? Are they running on the same box or separate boxes?

32, so there is a lot of communication for intel updates.
    If you load up 18k indicators but have 100 worker nodes, the Bro manager needs to send out 1,800,000 events to the workers. If the workers can't keep up, that data just ends up buffered in memory on the manager until it can be sent out.
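The fan-out above is simple multiplication, which makes it easy to estimate how the manager's outbound event queue scales with cluster size:

```python
def manager_event_fanout(indicators, workers):
    """Number of events the manager must send when every new indicator
    is propagated to every worker node once."""
    return indicators * workers

# 18k indicators fanned out to 100 workers queues 1.8M events;
# with the 32 workers mentioned in this thread it is 576k per update.
print(manager_event_fanout(18_000, 100))
print(manager_event_fanout(18_000, 32))
```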

Thanks for the help!