RPC and NFS Analyzers


I was wondering what we should do with the RPC and NFS analyzers in Bro
and the ones I've written. The analyzers that currently ship with Bro are
rather incomplete (NFS supports only the fstat, lookup, and getattr
procedures), and the RPC analyzer doesn't log (only through conn$service
and conn$addl) and has problems re-syncing to streams with gaps.

My RPC analyzer extends the stock one by adding a log file (if desired)
and doing the re-sync properly. So I think it makes sense to merge the
RPC analyzer into master (before or after 1.6).

My NFS analyzer does not fully implement all procedures yet either, but
it has skeletons for all of them (*), can track path- and filenames and
reads/writes/creates, and can extract and deliver file content to the
script layer (e.g., for libmagic). I don't currently have the cycles to
implement the missing procedures in the near future (the NFS analyzer
does what I need for my analysis), but I hope to do so at some stage.
The analyzer has, however, been tested with a ton of data (150GB+), is
stable, and works fine.
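For illustration, here's a minimal sketch (in Python, not Bro's script
language) of the kind of file-type detection one might apply to content
delivered at the script layer. The signature table and the identify()
helper are hypothetical stand-ins for a real libmagic call, not part of
the analyzer itself:

```python
# Hypothetical stand-in for libmagic: identify a file type from the
# leading magic bytes of content delivered by the NFS analyzer.

SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\x7fELF": "elf",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",
    b"\x1f\x8b": "gzip",
}

def identify(content: bytes) -> str:
    """Return a file-type label for the given content, or 'unknown'."""
    for magic, label in SIGNATURES.items():
        if content.startswith(magic):
            return label
    return "unknown"

if __name__ == "__main__":
    print(identify(b"\x7fELF\x02\x01\x01"))  # elf
    print(identify(b"%PDF-1.4 ..."))         # pdf
```

In practice you'd hand the extracted bytes to libmagic itself; the point
is only that once the analyzer delivers content to the script layer, any
such inspection becomes possible there.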

So, even though it's not fully implemented yet, it's a huge improvement
over the current one (which does pretty much nothing). I think it makes
sense to merge it into master as well (alternatively, we might consider
removing the NFS analyzer altogether). Also, the NFS and RPC analyzers
are in the same topic branch, so merging just one would require quite a
bit of surgery.

(*) It will report which not-yet-implemented procedure has been called,
along with its size and success status.

What are your thoughts?


Let's integrate them and I'll start writing/adapting scripts in the policy-scripts-new branch!


Sounds good to me too.


Just talked some more with Robin about that. The plan(TM) is that I'll
prepare the analyzers so they can be merged into master, and Seth can
then pull from master into policy-scripts-new to polish the policy scripts.


This takes many packets and presents them to Bro as one very large (~10 KB) packet.

MTU    Throughput (MBytes/sec)    Packet Rate    Bro Packet Rate
1500   470                        300K           24K
512    287                        560K           21K
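As a quick sanity check of these numbers (my arithmetic, not part of the
measurement): dividing throughput by the MTU should reproduce the wire
packet rate, and dividing it by the Bro packet rate gives the average
coalesced packet size Bro actually sees:

```python
# Sanity-check the table above: throughput / packet size ~= packet rate.
# Throughput is taken as MBytes/sec with MB = 1e6 bytes.

rows = [
    # (mtu_bytes, throughput_bytes_per_sec, reported_wire_pps, reported_bro_pps)
    (1500, 470e6, 300e3, 24e3),
    (512,  287e6, 560e3, 21e3),
]

for mtu, tput, wire_pps, bro_pps in rows:
    implied_wire_pps = tput / mtu          # packets/sec on the wire
    avg_coalesced = tput / bro_pps         # bytes per packet as Bro sees it
    print(f"MTU {mtu}: implied wire rate {implied_wire_pps / 1e3:.0f}K pps, "
          f"avg coalesced packet ~{avg_coalesced / 1e3:.0f} KB")
```

The 512-byte case checks out almost exactly (287e6 / 512 ~= 560K pps),
and at MTU 1500 the implied ~313K pps is in the same ballpark as the
reported 300K. Note the implied average coalesced size comes out above
10 KB, so the "10K" figure is presumably approximate.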

There were no drops reported, and none of the errors that I would normally see on the NICs if I turned off coalescing.

This does require increasing the pcap snap length.

Can anyone think of any downside to running Bro with packet-coalescing NICs?

I think I'm lost. At first I thought you were talking about interrupt coalescing (which shouldn't cause any changes), but now I'm not so sure. Can you point me to documentation for your packet coalescing NIC and the setting you're changing in your operating system to enable it?


Scatter/gather is the official term. It is on by default on the bmc 10 GigE cards.