Segmentation fault with RuleMatcher

Hi everybody!

As I explained in a previous mail, I'd like to log information using Bro, in particular the HTTP payload of each connection seen on a network.
I was looking for a way other than signatures to manage this. Thanks for your answers, but in the end I think signatures are not such a bad way to handle it, since they can easily be extended to other protocols just by changing the port numbers in the rules, and because I can format the output the way I want in a Bro script.

So, let's see the new problem... :frowning:

At the moment, I use these signatures:

signature http-request {
    ip-proto == tcp
    dst-port == 80
    payload /.*/
    event "http-request"
}

signature http-reply {
    ip-proto == tcp
    src-port == 80
    payload /.*/
    event "http-reply"
    tcp-state responder
}

signature http-effective-request {
    ip-proto == tcp
    dst-port == 80
    payload /.*/
    event "http-effective-request"
    requires-reverse-signature http-reply
}

In fact, I can get events for http-request, http-reply, and http-effective-request (which means Bro has effectively matched a (request, reply) couple).
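To illustrate the "just change the port numbers" point: the same pattern could presumably be reused for another TCP service, e.g. SMTP on port 25. This is only a sketch I have not tested, and the signature and event names here are made up:

```
signature smtp-request {
    ip-proto == tcp
    dst-port == 25
    payload /.*/
    event "smtp-request"
}

signature smtp-reply {
    ip-proto == tcp
    src-port == 25
    payload /.*/
    event "smtp-reply"
    tcp-state responder
}
```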
Then, here is the way I manage the data in a Bro script:

event signature_match(state: signature_state, msg: string, data: string)
    {
    if ( msg == "http-request" )
        current_session$req$payload = data;

    if ( msg == "http-reply" )
        current_session$rep$payload = data;

    if ( msg == "http-effective-request" )
        {
        current_session$startTime = state$conn$start_time;
        current_session$IP_clt = state$conn$id$orig_h;
        current_session$IP_srv = state$conn$id$resp_h;
        log_info(current_session);
        }
    }

where log_info is a function I defined to log the info :wink: contained in the current_session record.
Moreover, I load the http-reply module (so http and http-request are also loaded) and the signatures module in this script.
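(For completeness, current_session is a global record roughly like the following. I am reconstructing the declaration from the fields used in the handler above, so the exact type names and attributes are my guess, not the original declaration:)

```
type payload_rec: record {
    payload: string &default = "";
};

type session_info: record {
    startTime: time &optional;
    IP_clt: addr &optional;
    IP_srv: addr &optional;
    req: payload_rec;
    rep: payload_rec;
};

global current_session: session_info;
```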

Now the results:

On my computer it works perfectly, but I'm the only one generating HTTP traffic... :wink:

But when I launch this on a real probe, I get a "Segmentation Fault" after a random amount of time.
I dumped a core to locate the problem, and it seems to crash in RuleMatcher::ExecRule.

So, my question: what's the problem??? (I know there are better questions, but... :wink: )
Could it be due to excessive traffic???

Other information:
    - Traffic: about 5000 packets/s
    - HTTP traffic only: about 500 packets/s (I use a tcpdump filter to limit capture to this kind of traffic)
    - top gives me: %CPU max. about 15%, %MEM max. about 3%

Thanks in advance,



Can you post the entire backtrace please? Thanks.


Here is the backtrace:

#0 0x080e1f6f in RuleMatcher::ExecRule ()
#1 0x080e1f22 in RuleMatcher::ExecRuleActions ()
#2 0x080e1be1 in RuleMatcher::Match ()
#3 0x080e1cb7 in RuleMatcher::FinishEndpoint ()
#4 0x0807082a in Connection::FinishEndpointMatcher ()
#5 0x080fc690 in TCP_Connection::Done ()
#6 0x080b7f29 in HTTP_Conn::Done ()
#7 0x080efda5 in NetSessions::Remove ()
#8 0x0806daf8 in Connection::DeleteTimer ()
#9 0x0806d3b3 in ConnectionTimer::Dispatch ()
#10 0x0810d651 in CQ_TimerMgr::DoAdvance ()
#11 0x0810d278 in TimerMgr::Advance ()
#12 0x080cc55e in dispatch_next_packet ()
#13 0x080cc9f1 in net_run ()
#14 0x0804e931 in main ()
#15 0x4022e7b8 in __libc_start_main () from /lib/tls/
#16 0xbffffe7a in ?? ()
#17 0x692d006f in ?? ()
Cannot access memory at address 0x72622f2e


Christian Kreibich wrote:

I'll try to reproduce it here. Thanks for reporting this!


Sorry, it took me a while to get back to this. Could you try the attached patch and see if it helps?


bro-0.8a82-rulefix.diff (1.05 KB)

Arghhhh... It still doesn't work... (Seg Fault)

But I noticed the following line in the patch:

if ( resp_match_state )


I'm sorry if I'm wrong, but just looking at the structure of the code, I thought it should be "resp_match_state" instead of "orig_match_state".
In fact, I tried this change and it seems to work now. :slight_smile:

How do you feel about that???


Robin Sommer wrote:

Argh, yes, of course. Actually, I did it correctly in the first place and then somehow messed it up just before preparing the patch, it seems.