Good afternoon, list. I'm hoping to get a quick opinion on some hardware. I've done some brief searching and haven't really found what I'm looking for, so I'll post here in hopes that one of you can share some experience.
I'm exploring deployment of some Bro boxes and was hoping to leverage a great deal that Sun is offering to get the hardware. I know the boxes can do what I need them to do, as I've worked on Bro implementations elsewhere. What I'd really like to know is whether anyone has used the Sun (Intel 82598 chipset) dual port 10G cards. They're a decent savings of capital, but I'd rather just spend the money to get the cards I'm used to (single port 10G Intel or Myricom) if the dual port cards behave strangely or are a time-vortex to get working.
I'm making the assumption that the dual port cards operate similarly to the single port cards. Has anyone used these in a Bro deployment?
Thanks,
nb
---
Nick Buraglio
Network Engineer, CITES, University of Illinois
GPG key 0x2E5B44F4
Phone: 217.244.6428
buraglio@illinois.edu
Hi Nick,
Another hardware option is the Bivio platform (http://www.bivio.net).
First I should make a disclaimer that I work for the company.
We offer a hardware platform that is designed for DPI applications like
Bro. The system is really a networking platform, much different from the
off-the-shelf hardware you would get from Sun. The Bivio system is
designed to deal with traffic at 10Gb/s (or more with scaling) and comes
with configurable interfaces that range from 1 G copper to 10 G fiber.
The Bivio system is PowerPC Linux-based, so it is fairly trivial to port
Bro or any pcap-based application to our platform. I have ported it in
the past and built RPMs, and I'm currently looking forward to the
cluster release of Bro in the 1.5 version as it is an extremely good fit
for the distributed architecture design of our hardware.
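To give a concrete idea of what "pcap-based" means here: anything built around the standard libpcap capture loop ports over essentially unchanged as long as libpcap is available on the target. A minimal, purely illustrative sketch (the interface name is a placeholder, and this is generic code, not anything Bivio-specific):

/* Generic libpcap capture loop -- the kind of code that moves between
 * platforms untouched as long as libpcap is present. */
#include <pcap.h>
#include <stdio.h>

static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    /* A real application (Bro, tcpdump, ...) would parse the packet here. */
    printf("frame of %u bytes (captured %u)\n", h->len, h->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth1", 65535, 1, 100, errbuf); /* placeholder iface */
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, -1, handler, NULL);  /* capture until interrupted */
    pcap_close(p);
    return 0;
}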
I would highly recommend taking a look if your goal is not only to
use 10G interfaces, but to be able to deal with that 10G of traffic.
Cheers,
// Joel
Joel Ebrahimi
Solutions Engineer
Bivio Networks Inc.
http://www.bivio.net
I'd be careful about purchasing 10G NICs for packet capture. I have not
been able to configure a FreeBSD 6.3 system with a Myricom Myri-10G NIC
to reliably capture traffic on a lightly loaded link (~2 Gb/s, ~240
kpps). One option I'm interested in trying is the Endace DAG,
<http://www.endace.com/dag-network-monitoring-cards.html>. Does anyone
have experience using these cards with Bro?
Nick Buraglio wrote:
Hi Sean:
Back in 2006 we got 4 DAG 6.2SE cards to monitor our 10G links. At the time we were running firmware 2.5.7.5 on the cards. We had a real hard time keeping Bro running reliably in a sustained manner using the DAG cards. We encountered a lot of issues, including lack of drivers, lack of built-in support for libpcap, Bro crashing repeatedly, and the system heating up and crashing as well.
In fact, Robin helped us quite a bit and even wrote drivers and DAG support for Bro. Endace support was prompt too, and they provided us with a new, modified firmware, but not much changed.
During all that time, for production Bro we relied on a pair of Intel 10G cards while we tried to resolve the issues with the DAG cards (we spent considerable time trying to get them working).
All in all, we had a lot of issues running the DAG capture cards reliably. Eventually, we gave up and got Myricom 10G cards. We have been quite happy with the Myricom cards and have not encountered any issues since.
Hope this helps,
Aashish Sharma
NCSA
Your DAG experience is interesting. We demoed the 6.2SE’s and they seemed to run OK on libpcap apps for a few days in late 2006. We’ve been running the smaller 1 Gb cousin, the 4.5G2, in production since then with zero stability problems with libpcap apps. Link size is 1 Gb physical, 450 Mb/sec typical load. In my experience though, the difference maker is rarely in getting the packets to the CPU, but rather in the CPU grepping through the packets fast enough. I anticipate that the Bro cluster work will do more for full snaplength processing than hardware acceleration will unless someone writes Bro for Nvidia’s CUDA like they wrote Snort for CUDA with Gnort.
--Martin
Martin Holste wrote:
I recommend these cards, available from nPulse Networks [1] (Napatech is
the OEM). They have more features than the Endace cards and twice the
port density. And they fully support FreeBSD. Despite my numerous
requests, it seems Endace maintains that there will not be future support
for FreeBSD due to lack of demand. To the best of my knowledge, the
last officially supported FreeBSD version from Endace is the 6.x train.
Anyhow that's my personal gripe.
[1] http://www.npulsenetworks.com/
Napatech 2x10GE NT20E
http://www.napatech.com/products/capture_adapters/2x10g_pcie_nt20e.html
And when it's available, the NTNPU20E looks like a very exciting
complement to the NT20E's. It was displayed at Interop but is still a
few months out from release.
http://www.napatech.com/products/inspect_adapters.html
HTH,
--Jason
One thing I noticed with the NT20E is that the web site states that "20
Gbps throughput @ 64 bytes". I'm assuming that this means that the
device only captures 64 bytes of the data section of a packet. I also
assume this is configurable. For some things that's fine, but in most
NIDS (such as Bro, Snort, etc.) you usually want the whole packet.
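For what it's worth, with plain libpcap the capture length is simply the snaplen requested when the device is opened; whether the Napatech hardware exposes the same knob I can't say. A generic sketch (placeholder interface name, nothing Napatech-specific):

/* Snaplen controls how much of each frame libpcap hands you.
 * 64 bytes is fine for header accounting; a NIDS wants the whole frame. */
#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Headers only: */
    pcap_t *hdrs = pcap_open_live("eth1", 64, 1, 100, errbuf);

    /* Full frames, which is what Bro/Snort-style analysis normally needs: */
    pcap_t *full = pcap_open_live("eth1", 65535, 1, 100, errbuf);

    if (hdrs == NULL || full == NULL)
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);

    /* ...hand whichever handle you want to pcap_loop() as usual... */
    if (hdrs) pcap_close(hdrs);
    if (full) pcap_close(full);
    return 0;
}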
What are you using in terms of capture size and bandwidth, if you don't
mind me asking?
- Jason
Jason Chambers wrote:
The tech sheet says otherwise. "Full-line-rate processing for all
frames from 64 bytes to 10.000 bytes".
http://www.napatech.com/uploads/c_file/21_file_6159.pdf
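If I read the sheet right, the "64 bytes" is about worst-case packet rate rather than capture length. A quick back-of-the-envelope (assuming standard Ethernet framing overhead; these are my numbers, not Napatech's):

/* Packets per second at 10 Gb/s line rate with minimum-size frames.
 * Each 64-byte frame also costs 8 bytes of preamble and a 12-byte
 * inter-frame gap on the wire. */
#include <stdio.h>

int main(void)
{
    const double line_rate  = 10e9;               /* bits/s, one 10G port   */
    const double frame_bits = (64 + 8 + 12) * 8;  /* 672 bits per min frame */

    double pps = line_rate / frame_bits;          /* ~14.88 Mpps per port   */
    printf("%.2f Mpps per port, %.2f Mpps across both NT20E ports\n",
           pps / 1e6, 2 * pps / 1e6);
    return 0;
}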
I cannot comment on our setup at the moment as hardware is pending.
--Jason
Jason Carr wrote:
Something I found out about these cards is that they are PCIe v1.1. "PCIe
v2.0 compatible" doesn't mean what I thought it did. So even with a
PCIe v2.0 system you can only get 12.5 Gbps from the card.
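Rough numbers, assuming the card sits in an x8 slot (the overhead figure below is a guess on my part, not anything from Napatech):

/* Back-of-the-envelope PCIe 1.1 x8 bandwidth. */
#include <stdio.h>

int main(void)
{
    const double rate_per_lane = 2.5e9;    /* PCIe 1.1: 2.5 GT/s per lane */
    const double encoding      = 8.0 / 10; /* 8b/10b line coding          */
    const int    lanes         = 8;

    double raw_gbps = rate_per_lane * encoding * lanes / 1e9;  /* 16 Gb/s */
    printf("raw x8 data rate: %.1f Gb/s\n", raw_gbps);
    printf("minus ~20%% TLP/DLLP overhead: ~%.1f Gb/s\n", raw_gbps * 0.8);
    return 0;
}

So even before protocol overhead the slot tops out at 16 Gb/s, well short of 2x10G, which is presumably where a figure like 12.5 Gbps comes from.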
No idea at the moment when they will have a 2.0 version.
--Jason
Jason Carr wrote:
I actually did quite a bit of the work with Aashish on the DAG and Myricom cards (I was the one who gave them to him back when I still worked at NCSA), and like he said, we had lots of issues with them. Endace support was helpful, but in the end it was a more supportable direction to go with the Intel and Myricom cards.
Using NICs has proven to be very robust for us. The cards I originally sent the mail out about are now running on a FreeBSD 7.2 system watching pretty heavily loaded links, and so far I have not seen any issues.
nb
TACC is using the Sun dual port cards.
The system runs the Bro cluster with IP filters to break the traffic up into multiple IP quadrants; this allows a different CPU to work on each quadrant of IP space.
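Roughly, that means one BPF filter per worker, each selecting a quarter of the IPv4 space. A generic sketch of the idea (the interface name and the choice to filter on destination address are placeholders, not our exact setup):

/* Illustrative only: split traffic into four IP "quadrants" with BPF
 * filters, one filter per worker process/CPU. */
#include <pcap.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int quadrant = (argc > 1) ? atoi(argv[1]) : 0;   /* 0..3, one per worker */
    const char *filters[4] = {
        "dst net 0.0.0.0/2",    /* 0.0.0.0   - 63.255.255.255  */
        "dst net 64.0.0.0/2",   /* 64.0.0.0  - 127.255.255.255 */
        "dst net 128.0.0.0/2",  /* 128.0.0.0 - 191.255.255.255 */
        "dst net 192.0.0.0/2",  /* 192.0.0.0 - 255.255.255.255 */
    };

    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth2", 65535, 1, 100, errbuf); /* placeholder iface */
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    struct bpf_program prog;
    if (pcap_compile(p, &prog, filters[quadrant], 1, 0) == -1 ||
        pcap_setfilter(p, &prog) == -1) {
        fprintf(stderr, "filter setup: %s\n", pcap_geterr(p));
        return 1;
    }

    /* A real worker would run Bro's analysis loop here. */
    pcap_close(p);
    return 0;
}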
My rule of thumb is that it takes 1 CPU to process 1 Gbit/s of traffic.
Right now it is a 4-CPU system monitoring two 10 GigE connections; it's just a starter system. I plan to upgrade it to two 8-CPU systems, each monitoring one 10 GigE connection, later this year.
I don't know how far this configuration will scale.
Bill Jones