
Packet Capture vs Accurate Packet Capture

I just wanted to take a few minutes to share the results of some of the "Capture Limit" testing I have been doing in my lab. These results were shared at Sharkfest Europe 2019 in Estoril, Portugal. The purpose of the session was to discuss the considerations involved in building your own capture appliance. I am not trying to promote any specific product; rather, my goal is to demonstrate the point at which the accuracy of a capture on a laptop becomes questionable.

During my performance testing, I found that there was a huge difference between capturing everything (no packet loss) and capturing everything correctly (packet timing is accurate). Before getting into the results of the testing, let me tell you a bit about my setup.

My line-rate 1Gbps traffic generation box sent benign IP traffic (IP protocol number 99) to a target machine. The connection was tapped twice: one feed went from a network tap to my MacBook Pro (capture point A), and the other went through a ProfiShark device to a second capture point. The ProfiShark captures and timestamps packets in hardware on the device itself, while the network tap simply forwards the traffic to the capture device, where packets are collected and timestamped in software.
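For anyone curious what "benign IP traffic with protocol number 99" looks like on the wire, here is a rough sketch of the IPv4 header my generator would have been emitting. This is my own illustration, not the generator's actual code, and the source and destination addresses are made up:

```python
import struct

def ipv4_header(proto: int, payload_len: int,
                src="10.0.0.1", dst="10.0.0.2") -> bytes:
    """Build a minimal 20-byte IPv4 header carrying the given protocol number."""
    def ip(a):  # dotted-quad string -> 4 raw bytes
        return bytes(int(x) for x in a.split("."))
    total_len = 20 + payload_len
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, total_len,   # version/IHL, TOS, total length
                      0, 0,                 # identification, flags/fragment
                      64, proto, 0,         # TTL, protocol, checksum placeholder
                      ip(src), ip(dst))
    # Standard one's-complement header checksum over the 10 16-bit words
    s = sum(struct.unpack("!10H", hdr))
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)
    csum = ~s & 0xFFFF
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

# Protocol 99 with an 80-byte payload gives a 100-byte IP packet,
# matching the "small packet" size in the tests.
pkt = ipv4_header(99, 80)
print(pkt[9])  # protocol field
```

Since protocol 99 is unassigned to any common transport, the target machine silently drops it, which makes it a harmless filler stream for load testing.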

The traffic stream was sent as either small packets (100 bytes), medium-sized packets (512 bytes), or large packets (1518 bytes). My traffic generator could only do one packet size per test, so I ran the test several times to see the differences. I gradually reduced the throughput rate until capture point A could (a) keep up with the ingress rate, and (b) accurately timestamp the packets.
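To put some numbers behind these rates, here is a back-of-the-envelope calculation (my own math, not output from the test gear) of the expected time between packets for each frame size. On the wire, every Ethernet frame carries roughly 20 extra bytes of overhead - 7 bytes of preamble, 1 byte of start-of-frame delimiter, and a 12-byte inter-frame gap:

```python
WIRE_OVERHEAD_BYTES = 20  # preamble + SFD + minimum inter-frame gap

def packet_gap_us(frame_bytes: int, rate_bps: float) -> float:
    """Expected time between packet starts, in microseconds, at a sustained rate."""
    bits_on_wire = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return bits_on_wire / rate_bps * 1e6

for size in (100, 512, 1518):
    print(f"{size:>4}B frames  @ 1Gbps: {packet_gap_us(size, 1e9):6.2f} us   "
          f"@ 250Mbps: {packet_gap_us(size, 250e6):6.2f} us")
```

So even at the reduced 250Mbps rate, 100-byte frames should arrive less than 4 microseconds apart - which is the yardstick against which the measured deltas below should be judged.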

Here were the results, with 100,000 packets generated per test:

Let's examine these results.

In the test, 100,000 packets were sent to the target with varying packet sizes and throughput rates. Notice that capture point A was only able to capture all the packets once the rate was turned down to about 250Mbps. Even then, there was a ton of false jitter in the packets: the inter-packet delta times were all over the place, with a maximum value of around 20 milliseconds. This is pretty bad considering that the deltas should have been no higher than a few microseconds.
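This kind of false jitter is easy to quantify once you have the packet timestamps out of a capture file. Here is a sketch of the delta-time math; the timestamps below are fabricated to mimic a steady 250Mbps stream of 100-byte frames with one 20-millisecond buffering stall, not real data from my captures:

```python
def delta_stats(timestamps):
    """Return (min, max, mean) inter-packet delta in microseconds
    from a list of packet timestamps given in seconds."""
    deltas = [(b - a) * 1e6 for a, b in zip(timestamps, timestamps[1:])]
    return min(deltas), max(deltas), sum(deltas) / len(deltas)

# Synthetic example: a steady 3.84 us stream with a single 20 ms stall,
# similar in shape to the buffering artifact seen on capture point A.
ts, t = [], 0.0
for i in range(1000):
    ts.append(t)
    t += 0.02 if i == 500 else 3.84e-6

lo, hi, mean = delta_stats(ts)
print(f"min {lo:.2f} us, max {hi:.2f} us, mean {mean:.2f} us")
```

A single stall like this is enough to drag the maximum delta three orders of magnitude above the true inter-packet gap, which is exactly why looking only at packet counts hides the problem.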

The second thing to note is that the timestamping was off on capture point A until the rate was backed down to about 10Mbps. At that point, the delta times smoothed out and the capture device was able to keep up with the ingress traffic, timestamping it appropriately.

These tests were run both in the Wireshark GUI and on the command line with dumpcap. The results were only slightly better with dumpcap.

All the while, the hardware-backed appliance was able to keep up with line-rate 1Gbps, with correct timestamping.


If I am going to capture a packet stream at anything higher than 10Mbps of throughput, it's best to do it with my ProfiShark or another purpose-built appliance. Capturing all the packets is not enough for me - I also need them to be timestamped correctly. Hence the difference between packet capture and accurate packet capture!

Got questions? Let's get in touch!
