You probably need to take a look at PFRINGFirstAppInstance in broctl.cfg; it defaults to 0. If you want to use the second application instance created by zbalance_ipc, you'll need to set that option to 4.
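For instance, assuming the second zbalance_ipc application owns queues 4-7, broctl.cfg would contain something like:

```
# broctl.cfg -- start Bro's pf_ring app instances at queue 4
# (queues 0-3 belong to the first zbalance_ipc application)
PFRINGFirstAppInstance = 4
```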
Also make sure the lb_method and lb_procs are set appropriately in node.cfg file, for example:
interface=zc:99
lb_method=pf_ring
lb_procs=4 # should match the number of instances per 'ring'
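Put together, a worker section in node.cfg might look like the sketch below (hostname and pin_cpus values are just placeholders; pin_cpus is optional):

```
[worker-1]
type=worker
host=localhost
interface=zc:99
lb_method=pf_ring
lb_procs=4
pin_cpus=4,5,6,7
```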
If you really want to use zero-copy you need to add the prefix "zc:" to the physical interface name, e.g. zbalance_ipc -i zc:eth5. There are other prerequisites for that to work, such as configuring huge memory pages and installing the pf_ring-aware ZC driver.
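As a rough sketch of those prerequisites (interface name, cluster ID, driver, and queue counts here are examples; check the PF_RING docs for the exact flags in your version):

```
# reserve huge pages and mount hugetlbfs (required by ZC)
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs none /dev/hugepages

# load pf_ring and the ZC-aware NIC driver (ixgbe as an example)
modprobe pf_ring
insmod ./ixgbe.ko

# fan zc:eth5 out to two applications of 4 queues each,
# as PF_RING cluster 99, hashing on IP (-m 1)
zbalance_ipc -i zc:eth5 -c 99 -n 4,4 -m 1
```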
I’ve been testing with ZC also but having issues with Bro reporting increased packet loss rates as soon as I enable a configuration like this. Not sure if this is a hashing-mode conflict, a NIC/driver configuration issue, or something else. I’d be interested to hear about your (or anyone else’s) results with such a setup.
It may be supported, but we have tested and proven similar functionality in hardware. Our hardware and software can bind specific instances of Bro (or Suricata, for that matter) onto host cores - something we call flow affinity. Furthermore, those flows are load balanced any way the user wants them.
Example config: We compile both Bro and Suricata against our pcap libraries so that they each recognize our network interface nomenclature. In the case of Bro, we edit the /usr/local/bro/etc/node.cfg file to add the interface bindings and CPU pinning for each worker thread. See below. We then use broctl to start Bro processing.
Your Bro config looks like it should work. From what I’ve seen that usually indicates an issue with pf_ring; possibly that zbalance_ipc is failing to run?
A couple of other things to check on the pf_ring side, all of which applies to your worker nodes. Sorry if any of this is obvious; just throwing out ideas:
pf_ring kernel module installed
pf_ring-aware ZC NIC driver installed and in use by the physical interface (ethtool -i)
ZC license installed
huge memory pages configured
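A few commands that may help verify each of those (paths and interface names are examples):

```
lsmod | grep pf_ring          # kernel module loaded?
cat /proc/net/pf_ring/info    # pf_ring version and settings
ethtool -i eth5               # driver should be the ZC-aware one
grep Huge /proc/meminfo       # huge pages reserved?
```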
If successful, zbalance_ipc should output something like this (when not running in daemon mode and without stdout/stderr redirected), followed by traffic-collection stats:
Starting balancer with 8 consumer queues…
You can now attach to the balancer your application instances as follows:
Application 0
pfcount -i zc:99@0
pfcount -i zc:99@1
pfcount -i zc:99@2
pfcount -i zc:99@3
Application 1
pfcount -i zc:99@4
pfcount -i zc:99@5
pfcount -i zc:99@6
pfcount -i zc:99@7
Once zbalance_ipc is running, you can use zcount_ipc as another way to validate what zbalance_ipc is doing. If you can run zcount_ipc and get packets from each of the app instances, your Bro config should work too.
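For example, to consume from the first queue of the second application on cluster 99 (flags as I understand them from the PF_RING examples; adjust to your setup):

```
# read from cluster 99, queue 4 (first queue of application 1)
zcount_ipc -c 99 -i 4
```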
That should make it work. Unfortunately PFRINGFirstAppInstance is a global variable for broctl; that made it easier to implement, and I would generally expect people to be running the same or substantially similar zbalance_ipc configuration everywhere.
Thanks a lot - it seems that my own build of Bro has been linked against the system libpcap instead of the pf_ring one. I'm fighting with that now and will update this thread with observations once I get it working.
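For anyone else hitting this, one quick way to see which libpcap a Bro binary picked up (binary path is an example):

```
ldd /usr/local/bro/bin/bro | grep pcap
# a pf_ring-enabled build typically resolves to the libpcap
# installed by PF_RING rather than the system one
```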