Using BRO for measuring TCP flow bandwidth

Hello,

I am a beginner to BRO IDS and am currently using it for monitoring one interface of a FreeBSD machine over an experiment network.

Part of my project now also requires capturing the network bandwidth being utilized by a flow that passes through the BRO-monitored interface. By flow, we mean a source-destination IP pair.

Is this kind of measurement possible in BRO? If not, is there any add-on which can be used to accomplish the same task using BRO?

Kindly suggest and thanks in advance.

Regards,
Harkeerat Bedi

capstats (included with bro) can do this:

    capstats -I 5 -i eth0 -f 'host a.b.c.d and host e.f.g.h'

Thank you for your prompt assistance Justin. I will look into this.

Regards,
Harkeerat Bedi

If you are looking to get averages over the TCP session, look at the conn.bro file. It records enough information for you to derive the average throughput in either direction over the life of the connection. You can change the routine “record_connection” to calculate the average throughput in each direction.
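
For instance, here is a minimal sketch of that kind of calculation (illustrative only, not the actual conn.bro code; it uses the connection_state_remove event and the standard c$duration / c$orig$size / c$resp$size fields):

    # Illustrative sketch: print the average per-direction throughput once a
    # connection's state is removed, derived from the same fields conn.bro logs.
    event connection_state_remove(c: connection)
    {
        local dur = interval_to_double(c$duration);
        if ( dur > 0.0 )
            print fmt("%s -> %s: %.2f bytes/sec, %s -> %s: %.2f bytes/sec",
                      c$id$orig_h, c$id$resp_h, c$orig$size / dur,
                      c$id$resp_h, c$id$orig_h, c$resp$size / dur);
    }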

sridhar

Thank you Sridhar. I think what you mentioned is close to what I am trying to do. Allow me to look into the conn.bro file and I will update here accordingly.

Thank you once again.

Regards,
Harkeerat Bedi

Hello,

Thank you once again for your suggestions. I have been going through the Reference Manual, the conn.bro, and the methods in that file. I also went through some examples from the Bro 2007 workshop.

I am able to obtain the flow duration and the amount of data transferred as I want. However, I am facing one issue, which is explained below. Following is what I have done; kindly suggest.

  1. I have created one policy file called “ex2e.bro” and rewritten the connection_established method:

    event connection_established(c: connection)
    {
        local id = c$id;
        local log_msg =
            fmt("%.6f %.6f %s %s %d %d %d %d ",
                c$start_time, c$duration, id$orig_h, id$resp_h,
                id$orig_p, id$resp_p, c$orig$size, c$resp$size);
        print log_msg;
        schedule 5 sec { connection_established(c) };
    }

In the above policy, I call the same method every 5 seconds and the connection values are printed.

  2. I have one tcpdump trace which contains one TCP flow from 10.1.1.3 to 10.1.2.3. BRO and tcpdump run on an intermediate node which is analyzing this flow. This flow was started after BRO was started and ended after BRO was stopped.

  3. I use my policy file “ex2e.bro” on that tcpdump trace using the command:

sudo /…/bro -r testCapture4.dump ex2e.bro weird alarm | /…/cf

Aug 19 13:23:21 0.001304 10.1.1.3 10.1.2.3 50191 5001 0 0
Aug 19 13:23:21 4.986504 10.1.1.3 10.1.2.3 50191 5001 593704 0
Aug 19 13:23:21 10.001823 10.1.1.3 10.1.2.3 50191 5001 1193176 0
Aug 19 13:23:21 14.993030 10.1.1.3 10.1.2.3 50191 5001 1789752 0
Aug 19 13:23:21 20.016351 10.1.1.3 10.1.2.3 50191 5001 2389224 0
Aug 19 13:23:21 25.007562 10.1.1.3 10.1.2.3 50191 5001 2985800 0
Aug 19 13:23:21 29.998899 10.1.1.3 10.1.2.3 50191 5001 3582376 0
Aug 19 13:23:21 35.014104 10.1.1.3 10.1.2.3 50191 5001 4181848 0
Aug 19 13:23:21 40.005321 10.1.1.3 10.1.2.3 50191 5001 4778424 0
Aug 19 13:23:21 45.020655 10.1.1.3 10.1.2.3 50191 5001 5377896 0
Aug 19 13:23:21 50.012500 10.1.1.3 10.1.2.3 50191 5001 5974472 0
Aug 19 13:23:21 55.027839 10.1.1.3 10.1.2.3 50191 5001 6573944 0
Aug 19 13:23:21 58.371315 10.1.1.3 10.1.2.3 50191 5001 6973592 0

As we can see, the duration of the connection is updated every 5 seconds (as the method is called every 5 seconds).
Also, the originator’s bytes sent are incremented accordingly. This is what I wanted.

  4. However, when I run the same command on actual network traffic, that is:

$ sudo /…/bro -i em2 ex2e.bro weird alarm

I do not see a similar kind of output. Following is what I observe:
pcap bufsize = 32768
listening on em2
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0
1282249401.443512 0.001360 10.1.1.3 10.1.2.3 50191 5001 0 0

As we can see, neither the duration nor the originator’s bytes sent are incremented.

Shouldn’t the duration and the originator’s bytes sent increment the same way as they did on the tcpdump trace, since I am using the same commands? Am I missing something?

Also, is this approach of modifying connection_established() correct? I went with it as it worked on the tcpdump trace. I am interested in periodically obtaining the duration of an ongoing TCP flow and the number of bytes transferred so far over an actual network, before the connection is closed.

Kindly provide your suggestions.

Thank you,

Regards,
Harkeerat Bedi

Anyone?

My question is why BRO appears to behave differently when reading from a tcpdump trace versus an interface. Kindly advise.

Thanks,
Harkeerat Bedi

My question is why BRO appears to behave differently when reading from a
tcpdump trace versus an interface. Kindly advise.

It's not clear to me just why you're seeing the difference. The symptoms
suggest that the live run is using a different packet filter (in particular,
the default SYN/FIN/RST-only filter), and thus after the connection is
established, there's no input to update things further. However, if so
then you should have that same effect running on the trace.

You could test for this by running with a filter "-f tcp", which will
capture all TCP packets.
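
For example, something along these lines (reusing the elided path and interface from your earlier command):

    sudo /…/bro -i em2 -f tcp ex2e.bro weird alarm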

Note that your script misuses the connection_established event. It's not
meant to be generated at the script-level, and the semantics of executing
it again 5 seconds in the future are undefined. (Also, timing for executing
such scheduled events is actually driven by the arrival of traffic, so
that would be another potential difference between the live execution vs.
the trace one. But again I don't offhand see why it would lead to different
results.)

    Vern

Sorry, I don’t know why you are seeing a difference between the live capture and the offline trace.

Here is something I have used in the past to look at the kind of thing you are trying to get at. I expanded the connection structure to include a last-recorded timestamp. Then I use the new_packet or tcp_packet event to compare network_time() to the last recorded time (or the connection close) and do the printing there, rather than from a timer event.

Sridhar

Thank you Vern for your feedback. I am still working on this problem. I tested your suggestion of using “-f tcp”, but I could not see any difference.

This post is divided into three parts:

  1. I have one basic question regarding the way BRO works. My confusion is as follows:

Which is the main event handler in BRO that “usually” updates the c$duration, c$orig$size and c$resp$size variables of the connection object?
In my case, I believe the event handler that updates the duration and sizes of the connection when a new packet is observed is not being called when BRO is reading from live traffic. However, it is being called when BRO reads from a tcpdump trace.

  2. Regarding your suggestion on my use and invocation of the connection_established event, I have made some changes to my policy file and attached it to this mail. Can you kindly provide your feedback on this? (A rough sketch of the new structure follows after this list.)
    I do not schedule the connection_established event anymore; instead, I have defined another event (conn_bitrate_updater), which I schedule from the connection_established event. I have also tried to define the semantics of future executions of this event.

  3. Regarding your last comment: "Also, timing for executing such scheduled events is actually driven by the arrival of traffic, so that would be another potential difference between the live execution vs. the trace one."
    Can you suggest how I can address this scenario?
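
For reference, here is a rough, hypothetical sketch of the structure mentioned in item 2 (this is not the actual attached ex2e.bro, just an illustration of scheduling a user-defined event from connection_established):

    # Hypothetical sketch only; the real ex2e.bro is attached to this mail.
    global conn_bitrate_updater: event(c: connection);

    event conn_bitrate_updater(c: connection)
    {
        print fmt("%s %s %s %s %.6f %d %d",
                  c$id$orig_h, c$id$orig_p, c$id$resp_h, c$id$resp_p,
                  interval_to_double(c$duration), c$orig$size, c$resp$size);

        # A cleaner version would stop re-scheduling once the connection closes.
        schedule 5 sec { conn_bitrate_updater(c) };
    }

    event connection_established(c: connection)
    {
        schedule 5 sec { conn_bitrate_updater(c) };
    }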

Kindly provide your feedback regarding the above.

Thank you,

Regards,
Harkeerat Bedi

ex2e.bro (1.17 KB)

Thank you Sridhar for your feedback and time. I am still working on this problem. To recall, in my case BRO seemed to be behaving differently when reading from a live interface versus a tcpdump trace.

My previous experiment setup was as follows.

Setup1:
Node1 (Client) <------> Node2 (running BRO) < ------ > Node3 (Server)

Node1 (the client) was sending TCP traffic to Node3 (the server) via Node2 (which is running BRO). The output of BRO on this live traffic was not as expected. However, if I ran BRO on the tcpdump trace of the same traffic, the output was as expected (as shown in my previous email).

Now, as I was experimenting, I noticed that if I rearrange the experiment as follows:

Setup2:
Node1 (Client) <------> Node2 (Server + running BRO)

I obtain the results as I wanted even on live traffic. I am able to obtain the periodically updated duration and sizes which then I use to calculate the bitrate.

I was wondering: is the difference in behavior observed by BRO related to its location in the network?
(In Setup1 it was an intermediate node, whereas in Setup2 it is the terminal node.)
Kindly provide your feedback if this makes any sense or gives any clues.

Regarding your suggestion, I understand what you are implying; however, I am not sure how to do it. Is it possible for you to provide me with a snippet of code so that I can follow it?

Thank you in advance.

Regards,
Harkeerat Bedi

Below is the gist of what I was suggesting. You need to clean it up so that you catch the pre-connection-established phase, the connection close case, etc., but you get the idea.

    global conn_print: table[conn_id] of time;
    global print_delay = 1 sec;

    event connection_established(c: connection)
    {
        conn_print[c$id] = network_time();
    }

    event new_packet(c: connection, p: pkt_hdr)
    {
        # Minimal guard for the pre-connection-established phase mentioned
        # above; the connection close case etc. still needs handling.
        if ( c$id !in conn_print )
            return;

        if ( network_time() >= print_delay + conn_print[c$id] )
        {
            print network_time(), c$id, c$orig$size, c$resp$size;
            conn_print[c$id] = network_time();
        }
    }

Sridhar

My previous experiment setup was as follows.

Setup1:
Node1 (Client) <------> Node2 (running BRO) < ------ > Node3 (Server)

If on Node2 instead of running Bro you capture packets with tcpdump, does
Bro run correctly on the resulting trace? (Perhaps this is how you're
already capturing the traffic that it works correctly on, but I thought
of asking because on some systems packet capture for local traffic is
incomplete, and in particular lacks locally sent packets.)

What OS's are the Nodes running?

    Vern

Which is the main event handler in BRO that "usually" updates the
c$duration, c$orig$size and c$resp$size variables of the connection object?

It does so on any connection_* event that it generates. However, in between
those events, the variables are *not* updated. (That is, their updates
are driven by the generation of the events.)

Looking at the code, it appears that the new_packet event will also spur
an update, so capturing that should suffice.

In addition, there's a connection_status_update(c: connection) event that
you can turn on by defining a handler for it, and by setting
connection_status_update_interval to a positive time interval (e.g., "1 sec").
That will then be generated periodically at the given interval.
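
A minimal sketch of that approach (illustrative only; it assumes connection_status_update_interval is declared &redef so it can be redefined):

    # Illustrative: have Bro generate connection_status_update once per second
    # and report the per-connection duration and sizes from the handler.
    redef connection_status_update_interval = 1 sec;

    event connection_status_update(c: connection)
    {
        print fmt("%s %s %s %s duration=%.6f orig=%d resp=%d",
                  c$id$orig_h, c$id$orig_p, c$id$resp_h, c$id$resp_p,
                  interval_to_double(c$duration), c$orig$size, c$resp$size);
    }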

2. Regarding your suggestion on my use and invocation of the
connection_established event, I have made some changes to my policy file and
attached the same to this mail. Can you kindly provide your feedback on
this.

The way you structured it now looks good, modulo the consideration above
of when the variables actually get updated. That said, just using
connection_status_update directly would be simpler.

    Vern

My previous experiment setup was as follows.

Setup1:
Node1 (Client) <------> Node2 (running BRO) < ------ > Node3 (Server)

If on Node2 instead of running Bro you capture packets with tcpdump, does
Bro run correctly on the resulting trace?

  1. Yes. The command that I use is:
    $ sudo /usr/local/…/bin/bro -r …/testCapture6.dump ex2e.bro

Task: I am ftp’ing one file from Node 1 to Node3.

Snippet of output:
10.1.2.3 10.1.1.3 20 57713 bitrate: 117337.34, duration: 3.011079, size: 0 353312

10.1.1.3 10.1.2.3 43580 21 bitrate: 64.74, duration: 6.889347, size: 105 446
10.1.2.3 10.1.1.3 20 57713 bitrate: 117722.29, duration: 4.022144, size: 0 473496
10.1.1.3 10.1.2.3 43580 21 bitrate: 64.74, duration: 6.889347, size: 105 446
10.1.2.3 10.1.1.3 20 57713 bitrate: 117969.47, duration: 5.020214, size: 0 592232
10.1.1.3 10.1.2.3 43580 21 bitrate: 64.74, duration: 6.889347, size: 105 446
10.1.2.3 10.1.1.3 20 57713 bitrate: 118139.81, duration: 6.030279, size: 0 712416

Notice the increase in size and duration every second. This is as expected.

  2. When I run the following command (that is, reading from the interface “em2”):

$ sudo /usr/local/…/bin/bro -i em2 ex2e.bro

Task: Same as before (I am ftp’ing one file from Node 1 to Node3).

Snippet of output observed:

10.1.2.3 10.1.1.3 20 47271 bitrate: 0.00, duration: 0.003685, size: 0 0
10.1.1.3 10.1.2.3 36270 21 bitrate: 0.00, duration: 0.001420, size: 0 0
10.1.2.3 10.1.1.3 20 47271 bitrate: 0.00, duration: 0.003685, size: 0 0
10.1.1.3 10.1.2.3 36270 21 bitrate: 0.00, duration: 0.001420, size: 0 0
10.1.2.3 10.1.1.3 20 47271 bitrate: 0.00, duration: 0.003685, size: 0 0
10.1.1.3 10.1.2.3 36270 21 bitrate: 0.00, duration: 0.001420, size: 0 0
10.1.2.3 10.1.1.3 20 47271 bitrate: 0.00, duration: 0.003685, size: 0 0
10.1.1.3 10.1.2.3 36270 21 bitrate: 0.00, duration: 0.001420, size: 0 0
1283232336.474291 8.932614 10.1.2.3 10.1.1.3 20 47271 0 1052696 - TCP_CLOSED

Notice that the size and duration do not increase every second. However, when I stop the file transfer, I see updated values.

  3. One more thing I noticed: when I run my policy file along with the TCP and FTP analyzers on the live interface, using the command below:
    $ sudo /usr/local/…/bin/bro -i em2 ex2e.bro tcp ftp

Task: Same as before (I am ftp’ing one file from Node 1 to Node3).

I see the following output:
Snippet:

1283232724.432981 0.001834 10.1.1.3 10.1.2.3 53747 21 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 0.00, duration: 0.001834, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 10608.46, duration: 0.007824, size: 0 83
10.1.1.3 10.1.2.3 53747 21 bitrate: 10608.46, duration: 0.007824, size: 0 83
10.1.1.3 10.1.2.3 53747 21 bitrate: 63.43, duration: 2.222891, size: 16 141
10.1.1.3 10.1.2.3 53747 21 bitrate: 72.37, duration: 3.150419, size: 29 228
10.1.1.3 10.1.2.3 53747 21 bitrate: 72.37, duration: 3.150419, size: 29 228
10.1.1.3 10.1.2.3 53747 21 bitrate: 41.28, duration: 6.007643, size: 37 248
1283232730.452595 0.003124 10.1.2.3 10.1.1.3 20 40035 0 0
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 56.27, duration: 6.024612, size: 76 339
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 56.27, duration: 6.024612, size: 76 339
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 56.27, duration: 6.024612, size: 76 339
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 56.27, duration: 6.024612, size: 76 339
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 56.27, duration: 6.024612, size: 76 339
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 56.27, duration: 6.024612, size: 76 339
10.1.2.3 10.1.1.3 20 40035 bitrate: 0.00, duration: 0.003124, size: 0 0
10.1.1.3 10.1.2.3 53747 21 bitrate: 31.48, duration: 12.549360, size: 76 395
1283232730.452595 6.530739 10.1.2.3 10.1.1.3 20 40035 0 770336 - TCP_CLOSED
10.1.1.3 10.1.2.3 53747 21 bitrate: 31.48, duration: 12.549360, size: 76 395
10.1.1.3 10.1.2.3 53747 21 bitrate: 31.48, duration: 12.549360, size: 76 395
1283232724.432981 15.582105 10.1.1.3 10.1.2.3 53747 21 82 409 - TCP_CLOSED

Notice that the duration and size variables of the control connection (port 21) update every time I enter a new FTP command (e.g., ls, to list the files in the remote directory). This did not happen earlier, when I did not use the TCP and FTP analyzers. And when I stop the transfer and close the connection, I see the total duration and size.

I think that I am missing some event handlers, but I cannot figure out which ones. I even tried running BRO with “brolite” (which loads many of the standard analyzers) along with my policy file, but to no avail.

(Perhaps this is how you’re
already capturing the traffic that it works correctly on, but I thought
of asking because on some systems packet capture for local traffic is
incomplete, and in particular lacks locally sent packets.)

What OS’s are the Nodes running?

Node 2 and 3 are FreeBSD 7.2
Node 1 is Ubuntu 10.04

Vern

Thank you.
Harkeerat Bedi

Thank you for this Sridhar. Allow me to work on this.

Thanks,
Harkeerat Bedi

have you tried

    sudo /usr/local/.../bin/bro -i em2 ex2e.bro -f ip

yet? I believe it was suggested a couple of messages back.

Thank you Sridhar, Vern and Justin for your time and support. It is highly appreciated. I was finally able to get the output I required - that is, scheduled updates on the size and duration variables of the connection object on live traffic.

I had tried the -f filter option for TCP (-f tcp) earlier when it was suggested, but it did not help then.

The trick was to use the -f filter option for either IP or TCP along with the definition of an event handler like new_packet or tcp_packet in the policy file. Using just the event handler without the filter, or just the filter without the event handler did not work.

As Vern mentioned, the updates to the variables are driven by the generation of the events; therefore, even by defining an empty new_packet or tcp_packet event handler, the connection object was getting updated and I was able to get the desired output.
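
For anyone who finds this thread later, a minimal sketch of what I mean by an empty handler (illustrative only, not the full ex2e.bro):

    # Illustrative only: the body is intentionally empty. Merely handling the
    # event makes Bro generate it for every packet (given a suitable -f filter),
    # which in turn keeps c$duration, c$orig$size and c$resp$size up to date
    # for other handlers to read.
    event new_packet(c: connection, p: pkt_hdr)
    {
    }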

Thank you all once again, as without your support it would not have been possible.

Thanks,
Harkeerat Bedi