The amount of memory Bro uses depends heavily on the policy scripts
that you are running. If you additionally load the script
statistics.bro, you'll get a statistics.log file which should tell
you where all the memory has gone. One common trick is
to tune the various timeouts, as the script reduce-memory.bro does.
(Note: reduce-memory.bro sets timeouts that may not be suitable for
your needs.)
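For illustration, a tuning along those lines might look like the
following sketch. The timeout variables come from Bro's standard init
scripts; the particular values here are just examples, not
recommendations:

```bro
# Expire idle connection state sooner than the defaults.
# Values below are illustrative only -- tune for your traffic.
redef tcp_SYN_timeout = 5 sec;         # give up on unanswered SYNs quickly
redef tcp_inactivity_timeout = 5 min;  # drop state for idle TCP connections
redef udp_inactivity_timeout = 1 min;  # likewise for UDP flows
```

Shorter timeouts free state earlier at the risk of discarding
connections that are merely slow, which is exactly the trade-off the
note above warns about.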
Thanks a lot for your help!
But what I am wondering is whether there is a way to control Bro so that
it will never run out of memory.
For example, Bro could malloc up to an upper bound of, say, 10 MB, and if
more packets arrive, we just drop them until memory is freed again (other
packets have gone and released their memory). That way, Bro would
never run out of memory no matter how busy the network is!
As you suggested, I tested Bro using: ./bro -i eth1 finger.bro &. Here I modified finger.bro by adding
"redef capture_filters += { ["finger"] = "tcp dst port 80 or tcp src port 80" };" at the beginning
of the script. Since finger.bro does very little, we can say the memory Bro uses now depends less
on the policy scripts. Then I simulated a stream of 5000 HTTP connections per second, and after a few minutes the same
thing happened (out of memory) :(.
Any suggestions?
Besides, can you tell me where Bro mallocs its memory, and where and when it is freed?
BEST WISHES!
> But what I am wondering is whether there is a way to control Bro so that
> it will never run out of memory.
> For example, Bro could malloc up to an upper bound of, say, 10 MB, and if
> more packets arrive, we just drop them until memory is freed again (other
> packets have gone and released their memory). That way, Bro would
> never run out of memory no matter how busy the network is!
That's not implemented in Bro. One of Bro's design goals is *not* to
lose packets (independent of the cost in resources). In practice
that's not possible, and that's where the different tweaks like
timeouts play an important role. But there is no explicit mechanism to
hard-limit the memory usage.
> As you suggested, I tested Bro using: ./bro -i eth1 finger.bro &. Here I modified finger.bro by adding
> "redef capture_filters += { ["finger"] = "tcp dst port 80 or tcp src port 80" };" at the beginning
> of the script. Since finger.bro does very little, we can say the memory Bro uses now depends less
> on the policy scripts.
Yes and no: by defining the filter (at the script level) you told Bro to
do transport-layer analysis on all port 80 packets (i.e., TCP analysis
via a state machine).
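If the goal of the test is just to stress Bro without having it track
every byte of payload, one option (a sketch, assuming the
restrict_filters table from Bro's standard init scripts, which is
AND-ed with the capture filter; the table key is an arbitrary label) is
to capture only control packets:

```bro
# Only capture packets with FIN, SYN or RST set (tcp[13] is the TCP
# flags byte; 0x7 = FIN|SYN|RST), so Bro still sees connection
# setup/teardown but not the full data stream.
redef restrict_filters += { ["ctrl-pkts-only"] = "tcp[13] & 0x7 != 0" };
```

This reduces per-connection work, though of course any payload-level
analysis is then impossible.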
> Then I simulated a stream of 5000 HTTP connections per second, and after a few minutes the same
> thing happened (out of memory) :(.
> Any suggestions?
Sounds like Bro's TCP state machine needed more and more memory to
keep track of the connections it analyzes. Note that with the default
configuration, state is not discarded once Bro sees the connection
established, except by regular TCP teardown. It may happen that it
misses some of the connection teardowns and accumulates the state for
those connections forever (until it crashes ;-))
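One mitigation for that accumulation, assuming your Bro version honors
the inactivity timeout for established connections, is to expire such
lingering state explicitly:

```bro
# Discard state for established-but-idle connections after 10 minutes,
# so missed teardowns cannot accumulate forever. The value is
# illustrative; pick something longer than your longest expected
# legitimate idle period.
redef tcp_inactivity_timeout = 10 min;
```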
> Besides, can you tell me where Bro mallocs its memory, and where and when it is freed?
If it's really the connection-state memory that causes the problem,
then the memory is probably allocated via the construction of new
Connection objects (see Sessions.cc).