Suggestions on handling 1Gb/s HTTP traffic?

Hi,

I recently tested Bro 2.4.1 with ~1 Gb/s of HTTP traffic. It works, but the
processes get killed by the OOM killer within a few hours.

(The box has 16 cores and 64 GB of memory; that should be enough, right?)

Now I'm trying to resolve this, perhaps with one of the following:

1. Limit the volume of traffic that Bro will process (rough sketch below)
2. Tune Bro
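
For option 1, what I had in mind is something like this packet-filter redef in
local.bro (just a sketch; the filter string below is only an example, not what
I'm actually running):

    # Restrict capture to HTTP on port 80. Restrict filters are ANDed
    # with Bro's normal capture filter.
    redef restrict_filters += { ["only-http"] = "tcp port 80" };

Would that be a sane way to do it, or is there a better knob?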

Can someone please help?

And... what's the maximum amount of traffic you guys have ever tested?

Aaron,

What OS are you running Bro on?

Aashish

Linux, CentOS 6.3

Hi Aaron,

I run a similarly sized box (although with Myricom network cards) on Red Hat 6.5 that is inspecting about 3x as much total traffic.

Can you share more of your configuration? What network cards? What does /etc/sysctl.conf look like? Are SELinux or auditd running? What does your Bro configuration look like?
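
For comparison, the worker section of my node.cfg looks roughly like this (the
hostname, interface, and CPU numbers here are placeholders, not my exact values):

    [worker-1]
    type=worker
    host=10.1.1.10
    interface=eth2
    lb_method=myricom
    lb_procs=10
    pin_cpus=2,3,4,5,6,7,8,9,10,11

Seeing the equivalent section from your node.cfg would help.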

Cheers,
Harry

You need to elaborate on which processes are using memory and getting killed.

Posting this again:

Memory leaks are tricky. It is important to be clear about which component is using a lot of memory:

1) the workers - analyzer issues and leaks in general show up here.
2) the proxies - communication-related problems show up here.
3) the manager child process - if the manager is overloaded, the child will buffer log data.
4) the manager parent process - if a logging destination is overloaded, the parent will buffer log writes.
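
A quick way to narrow that down is to watch the per-process numbers that
broctl already gives you, e.g.:

    # per-process memory and CPU usage for the manager, proxies, and workers
    broctl top

    # packets received vs. dropped on each worker
    broctl netstats

Whichever process keeps growing in 'broctl top' tells you which of the four
cases above you're in.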

Aaron,

Have you confirmed that you're getting all of the traffic you expect?
Is the traffic simulated or real HTTP? How are you doing on-box load
balancing? PF_RING vanilla?
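
If it's PF_RING, the relevant part of node.cfg normally looks something like
this (the interface name and process count below are just examples):

    [worker-1]
    type=worker
    host=localhost
    interface=eth2
    lb_method=pf_ring
    lb_procs=10

lb_procs is the number of worker processes PF_RING will balance that interface
across; with 16 cores you'll want to leave a few cores free for the manager and
proxies.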