bro and pf_ring zc configuration success stories

Hi!

Anyone care to share a bro + pfring success story?

What’s the speed, what NIC, what’s the configuration.

I’m running bro 2.5.1 built with jemalloc and gperftools and against pf_ring 6.6.0 with ixgbe_zc on CentOS 7.2.

In ZeroCopy mode with zbalance_ipc dividing the NIC into 20 application rings (-n 20) I'm getting each CPU core loaded at 100% and around 50% packet drop (reported by netstats in broctl).

When redirecting from zc to 20 dummy interfaces (zbalance_ipc -r 0:dummy0 and so on) I'm getting around 50% load on each core and a lot less packet drop (10% - 15%).

This is with traffic around 700 - 800 Mbit/s
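
For reference, the two modes correspond roughly to invocations like these (flags approximated rather than my exact command lines; -i is the ingress interface, -c the ZC cluster id, -n the number of egress queues, -m the hashing mode, -g the core zbalance_ipc is pinned to, and -r reflects a queue onto a device):

# pure ZC: 20 queues that workers open as zc:<cluster id>@<queue id>
zbalance_ipc -i zc:enp5s0f0 -c 27 -n 20 -m 1 -g 1

# dummy-interface mode: each queue is reflected onto a dummy device
zbalance_ipc -i zc:enp5s0f0 -c 27 -n 20 -m 1 -g 1 -r 0:dummy0 -r 1:dummy1 (and so on up to -r 19:dummy19)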

All input will be highly appreciated.

Best regards
Rado

Hi!

Anyone care to share a bro + pfring success story?

What's the speed, what NIC, what's the configuration.

I'm running bro 2.5.1 built with jemalloc and gperftools and against pf_ring 6.6.0 with ixgbe_zc on CentOS 7.2.

You can't be using both jemalloc and gperftools(tcmalloc).. they are both malloc implementations.

In ZeroCopy mode with zbalance_ipc dividing the NIC into 20 application rings (-n 20) I'm getting each CPU core loaded at 100% and around 50% packet drop (reported by netstats in broctl).

Sounds like the load balancing is not working right and you are just analyzing all of your traffic 20 times. What does your node.cfg contain?
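
For comparison, a typical pf_ring-load-balanced worker entry (as in the Bro docs, for the classic setup where Bro's libpcap is PF_RING-aware; the interface name here is just an example, and the plugin variant discussed further down uses a pf_ring:: prefix instead) looks something like this, with broctl spawning lb_procs processes on the one interface:

[worker-1]
type=worker
host=localhost
interface=eth0
lb_method=pf_ring
lb_procs=4
pin_cpus=1,2,3,4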

When redirecting from zc to 20 dummy interfaces (zbalance_ipc -r 0:dummy0 and so on) I'm getting around 50% load on each core and a lot less packet drop (10% - 15%).

This is with traffic around 700 - 800 Mbit/s

A few workers should be able to handle this load, not to mention 20..

All input will be highly appreciated.

Can you install bro-pkg (see the Quickstart Guide in the Zeek Package Manager documentation) and then do

bro-pkg install bro-doctor --version 1.16.1
broctl doctor.bro

And share the results.
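
If bro-pkg itself is not installed yet, getting it amounts to roughly this (assuming a working pip; see the quickstart guide mentioned above for the full steps):

pip install bro-pkg
bro-pkg autoconfig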

Hi!
Thank you for your reply.

In ‘full zerocopy’ mode:

zbalance_ipc cluster-27.conf:

https://gist.github.com/radoslawc/afa7293fde9ba5bc9f51640d5fc63005

node.cfg:

https://gist.github.com/radoslawc/c7406452f01c14caa43c729c164d701b

bro doctor output for above setup:

https://gist.github.com/radoslawc/bb3e608dfa7ceca97378c26e98520fae

Bro doctor states that the bro binary is not linked against pfring (which is correct, as configure doesn't offer this option); instead I've used the pf_ring plugin from aux:

Bro-PF_RING.linux-x86_64.so
user@u1604:/opt/bro/lib/bro/plugins/Bro_PF_RING/lib$ ldd Bro-PF_RING.linux-x86_64.so
linux-vdso.so.1 => (0x00007ffdd37f1000)
libpfring.so => /usr/local/lib/libpfring.so (0x00007f85dbd5e000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f85db9dc000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f85db7c6000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f85db3fc000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f85db1df000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f85dafd7000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f85dadd3000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f85daaca000)
/lib64/ld-linux-x86-64.so.2 (0x00007f85dc1dc000)

I’ll rebuild bro with gperftools only, thank you for pointing that out.

Best regards
Rado

Hi!
Thank you for your reply.

In 'full zerocopy' mode:

zbalance_ipc cluster-27.conf:

https://gist.github.com/radoslawc/afa7293fde9ba5bc9f51640d5fc63005

node.cfg:

https://gist.github.com/radoslawc/c7406452f01c14caa43c729c164d701b

bro doctor output for above setup:

https://gist.github.com/radoslawc/bb3e608dfa7ceca97378c26e98520fae

Ah.. so this is not good:

error: 99.17%, 7562 out of 7625 connections are half duplex

And this is not great either:

ok, only 0.00%, 0 out of 13 connections appear to be duplicate

It only looked at 13 connections because there were only 13 bidirectional connections in the log.

I think your problem is this:

interface=zc:27

That should not actually work with the pf_ring plugin.. in order to use the pf_ring plugin the interface needs to start with pf_ring::, so I believe you need

interface=pf_ring::zc:27

So try that and see if that fixes everything. If not, can you remove lb_procs and move to one worker for now to at least verify that that configuration works.
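
i.e. something minimal like this (a sketch, keeping your cluster id):

[worker-1]
type=worker
host=localhost
interface=pf_ring::zc:27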

Bro doctor states that the bro binary is not linked against pfring (which is correct, as configure doesn't offer this option); instead I've used the pf_ring plugin from aux:

Bro-PF_RING.linux-x86_64.so
user@u1604:/opt/bro/lib/bro/plugins/Bro_PF_RING/lib$ ldd Bro-PF_RING.linux-x86_64.so
        linux-vdso.so.1 => (0x00007ffdd37f1000)
        libpfring.so => /usr/local/lib/libpfring.so (0x00007f85dbd5e000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f85db9dc000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f85db7c6000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f85db3fc000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f85db1df000)
        librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f85dafd7000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f85dadd3000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f85daaca000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f85dc1dc000)

Ah, that is correct. I need to have it separately check to see if bro -N lists the pf_ring plugin.

If the pf_ring::zc thing fixes things, I'll fix bro-doctor to check for that.

I think the check needs to be that if bro -N lists the pf_ring plugin, the interface MUST start with pf_ring::
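
Roughly, that check would be something like this (just a sketch, not actual bro-doctor code; the bro and node.cfg paths are assumptions):

#!/bin/sh
# Warn if the pf_ring plugin is loaded but some node.cfg interface lines lack the pf_ring:: prefix.
if /opt/bro/bin/bro -N | grep -q 'Bro::PF_RING'; then
    if grep '^interface=' /opt/bro/etc/node.cfg | grep -qv 'pf_ring::'; then
        echo "warning: pf_ring plugin is installed but some interfaces do not start with pf_ring::"
    fi
fi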

The bro pf_ring plugin should probably do the same check.. I think there are a few issues with the pf_ring plugin. I'm working on fixing one issue that causes the plugin to be broken if you are not using ZC.

Hi!

I’ve rebuilt bro with gperftools only.

With worker defined like this:

[worker-1]
type=worker
host=localhost
interface=pf_ring::zc:27
lb_method=pf_ring
lb_procs=20

all worker threads fail with the message below:

==== stderr.log

fatal error: problem with interface pf_ring::zc:27 (No such device)

With zbalance_ipc stopped and using the NIC device directly:

[worker-1]
type=worker
host=localhost
interface=pf_ring::zc:enp5s0f0
lb_method=pf_ring
lb_procs=20

only one worker thread starts:

[BroControl] > status
Name Type Host Status Pid Started
logger logger localhost running 3886 28 Sep 09:38:30
manager manager localhost running 4063 28 Sep 09:38:32
proxy-1 proxy localhost running 4384 28 Sep 09:38:34
proxy-2 proxy localhost running 4386 28 Sep 09:38:34
worker-1-1 worker localhost stopped
worker-1-2 worker localhost stopped
worker-1-3 worker localhost running 4751 28 Sep 09:38:36
worker-1-4 worker localhost stopped
worker-1-5 worker localhost stopped
worker-1-6 worker localhost stopped
worker-1-7 worker localhost stopped
worker-1-8 worker localhost stopped
worker-1-9 worker localhost stopped
worker-1-10 worker localhost stopped
worker-1-11 worker localhost stopped
worker-1-12 worker localhost stopped
worker-1-13 worker localhost stopped
worker-1-14 worker localhost stopped
worker-1-15 worker localhost stopped
worker-1-16 worker localhost stopped
worker-1-17 worker localhost stopped
worker-1-18 worker localhost stopped
worker-1-19 worker localhost stopped
worker-1-20 worker localhost stopped

The rest of them fail with this message:

==== stderr.log

fatal error: problem with interface pf_ring::zc:enp5s0f0 (Bad address)

Best regards

Rado

Do you have the pf_ring plugin installed? Do you see this output?

$ bro -N | grep -v built-in
Bro::PF_RING - Packet acquisition via PF_RING (dynamic, version 1.0)

Yes, the plugin is installed:
root@u1604:~# /opt/bro/bin/bro -N | grep -v built-in
Bro::PF_RING - Packet acquisition via PF_RING (dynamic, version 1.0)

With this worker definition:

[worker-1]
type=worker
host=localhost
interface=zc:27
lb_method=pf_ring
lb_procs=20

I've double-checked now and I'm able to start it, and all 20 threads are reported as running in broctl.

Best regards
Rado

Yes, but the plugin is only actually used when you have interface=pf_ring::...

If you are using interface=zc:27 then you're just opening the zc: interfaces using libpcap.

According to http://www.ntop.org/pf_ring/best-practices-for-using-bro_ids-with-pf_ring-zc-reliably/, you should run zbalance_ipc using dummy interfaces like

-r 0:dummy0 -r 1:dummy1 -r 2:dummy2 -r 3:dummy3

Then you would configure bro like

[worker-0]
type=worker
host=localhost
interface=pf_ring::dummy0
pin_cpus=1

[worker-1]
type=worker
host=localhost
interface=pf_ring::dummy1
pin_cpus=2

[worker-2]
type=worker
host=localhost
interface=pf_ring::dummy2
pin_cpus=3

[worker-3]
type=worker
host=localhost
interface=pf_ring::dummy3
pin_cpus=4
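
Before starting bro you would create the dummy interfaces and point zbalance_ipc at them, roughly like this (a sketch; the ingress interface and cluster id are placeholders, and the exact flags may differ from what the blog post uses):

modprobe dummy numdummies=4
ip link set dummy0 up
ip link set dummy1 up
ip link set dummy2 up
ip link set dummy3 up
zbalance_ipc -i zc:eth1 -c 99 -n 4 -m 1 -g 1 -r 0:dummy0 -r 1:dummy1 -r 2:dummy2 -r 3:dummy3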

Hi!
Yes, this was my initial setup (with dummy interfaces). I've used the worker definition you suggested (pf_ring::dummy{0..19}); before, I was using interface=dummy{0..19}.
It works; with the same traffic replayed, netstats:

https://gist.github.com/radoslawc/4ca4d2f8bb0e7a2e5763d53eb31b59de

so almost no drops,

capstats returns nothing with interface=pf_ring::dummy{0..19}; with interface=dummy{0..19} it worked, but that's not the issue.

Here’s htop btw:
https://imgur.com/a/99ETo

My question is: with dummy interfaces, doesn't it defeat the purpose of zero copy? Packets have to pass through the kernel to the dummy interface.

Also I've used this worker definition for all 20 of them:

[worker-0]
type=worker
host=localhost
interface=pf_ring::zc:27@0
pin_cpus=1

and the result was identical to using:

[worker-0]
type=worker
host=localhost
interface=zc:27
lb_method=pf_ring
lb_procs=20

meaning all used cores were loaded at 100% with immediate, high packet drop:

netstats from broctl:
https://gist.github.com/radoslawc/c7d5c97fe443b1bed62ca4025249a342

Best regards
Rado

Hi!
Yes, this was my initial setup (with dummy interfaces). I've used the worker definition you suggested (pf_ring::dummy{0..19}); before, I was using interface=dummy{0..19}.
It works; with the same traffic replayed, netstats:

https://gist.github.com/radoslawc/4ca4d2f8bb0e7a2e5763d53eb31b59de

so almost no drops,

capstats returns nothing with interface=pf_ring::dummy{0..19}; with interface=dummy{0..19} it worked, but that's not the issue.

Here's htop btw:
https://imgur.com/a/99ETo

Initially you said

This is with traffic around 700 - 800 Mbit/s

Did you mean 700 megabits/sec or megabytes/sec ?

At 700 Mbits/sec I'd expect the load on 20 workers to be almost nothing. What model CPU is in this box?

My question is: with dummy interfaces, doesn't it defeat the purpose of zero copy? Packets have to pass through the kernel to the dummy interface.

It's what they recommend, so it's probably fine...

Another issue I see with your configuration is that you are passing -g=2 to zbalance_ipc, which tells it to bind to core 2. You should specifically bind zbalance_ipc and bro to different cores.
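
For example, keep zbalance_ipc pinned to core 1 (e.g. -g=1) and start the workers' pin_cpus at 2 and up, so the balancer never shares a core with a bro worker.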

I'm also not sure what the -n=20,1 does and if that should just be -n=20.

Also I've used this worker definition for all 20 of them:

[worker-0]
type=worker
host=localhost
interface=pf_ring::zc:27@0
pin_cpus=1

and the result was identical to using:

[worker-0]
type=worker
host=localhost
interface=zc:27
lb_method=pf_ring
lb_procs=20

Can you just run 4 workers and see how it works? You don't need 20 cores to handle 700mbit. I just checked one of our worker boxes that is currently getting around 4000mbit with 14 workers and the cpus are at about 70%

That’s megabits, as reported by capstats total.

cpu is:
https://gist.github.com/radoslawc/376ddb061354aec40e376214f6d830cc

nic is:

05:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

-n=20,1 creates two applications: one with all traffic divided across 20 rings (so effectively you've got 20 queues to attach 20 processes/threads to), and a second application where one extra process can consume the same traffic (for example zcount).

Netstats is reporting 700 megabits per second (which I've assumed is the amount of traffic bro handles, dropping the rest; or am I wrong?). The traffic this sensor receives is 2 to 3.5 Gbit/s from Ixia's Breaking Point traffic generator.

I’ve moved zbalance_ipc to core #1 and started bro with 4 workers:

[worker-0]
type=worker
host=localhost
interface=pf_ring::zc:27@0
pin_cpus=2

bound to cpu # 2,3,4,5

Maybe 30 seconds into the test:

[BroControl] > capstats

Interface kpps mbps (10s average)

Ah... if you are sending bro random traffic from a traffic generator then it is not going to work well at all.

Can you configure your traffic generator to send it "real" traffic?

Can you configure your traffic generator to send it “real” traffic?

That's the setup; it is even called Real-World Traffic ™ by the vendor. Currently that's the only way for me to get somewhat reproducible test results in my setup.

Can you set the rate to 200mbit then for a bit? You need to get things to a point where the workers are running properly without drops.

Then once the configuration looks correct and bro is logging proper connections you can start ramping the rate back up.

Based on the "error: 99.17%, 7562 out of 7625 connections are half duplex" from before, nothing was working properly... and 50% drops alone wouldn't cause that.

Will do, I'll get back with results tomorrow as my day has ended. Thanks for your help so far.

Hi!
I'm back with results. I've created a new test and ran 200 Mbit/s, 600 Mbit/s and 1 Gbit/s, then went all in with 8 Gbit/s.

  1. You were right about the traffic generator: the previous test had some parameters changed and was doing something funky with TCP. I've removed that, and the issues above are, to some extent, gone.
  2. With zbalance_ipc -n 20 and this worker definition:

[worker-0]
type=worker
host=localhost
interface=pf_ring::zc:27@0
pin_cpus=1

I'm able to process 4.5 Gbit/s with all 20 cores loaded at 60-70% and minimal drop at bro:

[BroControl] > netstats

worker-0: 1506695586.298096 recvd=5465310 dropped=30118 link=5465310
worker-1: 1506695586.497686 recvd=5438281 dropped=9041 link=5438281
worker-2: 1506695586.701504 recvd=5498208 dropped=8756 link=5498208
worker-3: 1506695586.901398 recvd=5457893 dropped=9326 link=5457893
worker-4: 1506695587.101722 recvd=5472315 dropped=8877 link=5472315
worker-5: 1506695587.301448 recvd=5541810 dropped=10604 link=5541810
worker-6: 1506695587.501405 recvd=5556953 dropped=2022 link=5556953
worker-7: 1506695587.705590 recvd=5508997 dropped=2149 link=5508997
worker-8: 1506695587.905592 recvd=5526052 dropped=1955 link=5526052
worker-9: 1506695588.105445 recvd=5506942 dropped=2751 link=5506942
worker-10: 1506695588.305863 recvd=5597609 dropped=7534 link=5597609
worker-11: 1506695588.505499 recvd=5550657 dropped=4975 link=5550657
worker-12: 1506695588.705426 recvd=5578005 dropped=1152 link=5578005
worker-13: 1506695588.905554 recvd=5541178 dropped=90 link=5541178
worker-14: 1506695589.109446 recvd=5561273 dropped=3568 link=5561273
worker-15: 1506695589.309585 recvd=5552211 dropped=2850 link=5552211
worker-16: 1506695589.509799 recvd=5524173 dropped=7896 link=5524173
worker-17: 1506695589.709838 recvd=5565320 dropped=10923 link=5565320
worker-18: 1506695589.910352 recvd=5632122 dropped=9169 link=5632122
worker-19: 1506695590.113969 recvd=5603647 dropped=10448 link=5603647

This drop occurred at the beginning of the test and stayed at that level until the end (20 minutes).
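
(For scale: the worst worker above is worker-0 with 30118 / 5465310 ≈ 0.55% dropped, and every other worker is below 0.2%.)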

With zbalance_ipc -n 20 -r 0:dummy0 (and so on) and 20 workers defined like this:

[worker-0]
type=worker
host=localhost
interface=pf_ring::dummy0
pin_cpus=1

I can process around 3 Gbit/s, and around 36% of packets are dropped at the zbalance_ipc ingress (the ixgbe NIC), so it seems the bottleneck here is the zc -> dummy packet processing.
The core designated for zbalance_ipc is loaded at 100% during the test; I'll look into that next.

So far so good.
I'll keep posting updates on my findings.

I’m very grateful for your help.
Thank you.

Best regards
Rado

Hi radek,

Would you like to test using NETMAP too? It's fairly easy to get going with it these days and I'd be more than happy to give you a hand. Seems worthwhile as a comparison point at least, and it shouldn't take very long to get it set up.

.Seth

Hi!
Sure, this looks interesting; let me get back to you once the setup is ready.
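
For the record, my (untested) understanding is that a netmap-based worker entry would look roughly like this; the interface name is just an example:

[worker-1]
type=worker
host=localhost
interface=netmap::enp5s0f0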
Best regards
Rado