Can anyone provide me with an example of how I can use pybroker and a Python script to update a table inside a running Bro cluster (Bro workers, to be exact)?
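For what it's worth, a rough sketch of the usual pattern: a Python process peers with the cluster over Broker and publishes an event that a Bro script handles by updating the table. This uses the newer `broker` Python bindings (the successor to pybroker); the topic, port, and event name (`Example::table_update`) are hypothetical and must match whatever your Bro-side script subscribes to and handles.

```python
# Sketch: publish table updates to a Bro cluster over Broker.
# The topic "bro/event/example" and the event name "Example::table_update"
# are hypothetical; a Bro script on the receiving node must subscribe to
# the topic and handle the event by writing into the table.
try:
    import broker  # Broker Python bindings (successor to pybroker)
except ImportError:
    broker = None  # allows the helper below to be exercised without Broker


def make_update(ip, label):
    """Build the (event name, arguments) tuple for the hypothetical event."""
    return ("Example::table_update", ip, label)


def publish_update(ep, topic, ip, label):
    """Send one table update to a peered endpoint."""
    name, *args = make_update(ip, label)
    ep.publish(topic, broker.zeek.Event(name, *args))


if __name__ == "__main__" and broker is not None:
    ep = broker.Endpoint()
    ep.peer("127.0.0.1", 9999)  # assumes a cluster node listening on this port
    publish_update(ep, "bro/event/example", "10.0.0.1", "vpn")
```

On the Bro side you would handle `Example::table_update` and assign into the table there; in a cluster setup the event (or the resulting table change) still needs to be relayed to the workers.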
There is still ongoing work on the upcoming config framework, which will enable dynamic reconfiguration of many parts of Bro. We built it within Corelight, and an initial implementation has already been pushed to a branch in the Bro repository, but it will be improved before being merged into master.
I would still recommend playing with pybroker if you're interested in it, though. If a point solution solves your problem more easily and quickly, you shouldn't let upcoming yet incomplete work stop you!
Indeed, I was also going to ask that. We did some performance measurements when we first wrote it, and it actually is quite fast. There are relatively few components between the input reader and the code storing things in a table; I cannot be 100% sure, but I doubt that other ingestion methods can be much faster (I actually doubt that they will be faster at all).
By slow I mean that writing to a file on a remote machine incurs network and I/O (read and write) overhead.
I suppose something like ZeroMQ or a syslog-style messaging framework would be more efficient.
In my case, I have a file that is being updated with 3+ lines per second (each line has 3 fields). This file is being mapped to a table (&create_expire=10min).
Upon a new connection I check if orig_h is in this table and assign a field accordingly.
I see that many orig_h’s are not recognized even though they exist in the file.
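One plausible explanation worth ruling out: `&create_expire` expires an entry a fixed interval after it was *created*, regardless of how recently the file was re-read, so hosts that entered the table more than 10 minutes ago can drop out of it even though they are still in the file (`&read_expire` or `&write_expire` behave differently). A toy Python model of that semantics, with hypothetical keys and timestamps:

```python
# Toy model (not Bro code) of a table with &create_expire-like semantics:
# entries vanish a fixed interval after creation, even if re-inserted.

CREATE_EXPIRE = 600.0  # seconds; mirrors &create_expire=10min


class CreateExpireTable:
    def __init__(self):
        self._created = {}  # key -> creation timestamp

    def insert(self, key, now):
        # Re-inserting an existing key does NOT reset its creation time,
        # which is the crucial property of create-based expiry.
        self._created.setdefault(key, now)

    def contains(self, key, now):
        t = self._created.get(key)
        if t is None:
            return False
        if now - t >= CREATE_EXPIRE:
            del self._created[key]  # expired relative to creation time
            return False
        return True


tbl = CreateExpireTable()
tbl.insert("10.0.0.1", now=0)
tbl.insert("10.0.0.1", now=500)            # file re-read, same line re-inserted
assert tbl.contains("10.0.0.1", now=599)   # still there just before 10 min
assert not tbl.contains("10.0.0.1", now=700)  # gone, though the file has it
```

If this matches what you are seeing, an expiry attribute keyed to reads or writes, or a longer interval, may behave more like what you expect.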
Seth, can you please point me to the branch that includes this reconfigurable Bro framework?
Well, it's hard to provide you with this information.
Right, but you are saying things are inefficient and slow, so I would expect you to have numbers to back up these statements.
How fast are you expecting this table to update? 1ms? 1s? 30s? 'slow' is relative, and you haven't said what your expectations are, just that things aren't working.
As a process, writing to a remote file and then reading that remote file into a Bro table is not the most efficient way to perform such a task.
What remote file? Are you using NFS or something? Again you are saying things are not efficient, but you have not provided any information that proves this.
You said you are writing 3 lines per second and that each line has 3 fields. This means that you are writing maybe 300 bytes/second to a file. This is not an extreme amount of data and it's not really possible to be inefficient with 3 lines per second.
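The back-of-the-envelope arithmetic behind that estimate, assuming a hypothetical ~100 bytes per 3-field line:

```python
# Rough throughput of the described file updates.
lines_per_second = 3
bytes_per_line = 100  # hypothetical: 3 fields plus separators and newline
bytes_per_second = lines_per_second * bytes_per_line
print(bytes_per_second)  # 300
```

Even with generous per-line sizes, this stays in the hundreds of bytes per second, far below anything I/O-bound.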