Well, I'll answer the question myself.
On Linux systems, when tcpdump finishes capturing packets, it prints a summary like this:
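(the counts below are made up for illustration; the three lines are what matters)

    8921 packets captured
    12007 packets received by filter
    3086 packets dropped by kernel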
To put it simply, "captured" is the number of packets tcpdump actually processed, i.e. the number of packets that end up in the resulting pcap file; "received by filter" is the number of packets that reached the filter; and "dropped by kernel" is the number of packets that were never processed.
What exactly "received by filter" counts depends on the operating system tcpdump is running on and how it was configured. On Linux, if you specify a filter, a packet is counted whether or not it matches the filter expression, and even if it does match, whether or not tcpdump has read and processed it yet; in other words, every packet that reaches the filter increments "received by filter" by 1. If the socket's receive buffer is full, the packet is discarded and "dropped by kernel" is incremented by 1, so both the "received by filter" and "dropped by kernel" counters are maintained by the kernel.
The reason for the packet loss is that after libpcap captures a packet, the upper layer (tcpdump) does not take it out of the buffer in time, the libpcap buffer overflows, and the unprocessed packets are discarded; this shows up as "dropped by kernel". The "kernel" here does not mean the packets were dropped by the Linux kernel itself, but by the capture layer underneath tcpdump, i.e. libpcap.
There are also some common workarounds (rough command examples follow the list):
1. The -n option: disable reverse DNS resolution.
2. The -s option: limit the snapshot length of captured packets. (A larger snapshot length not only increases the time needed to process each packet, it also reduces the number of packets that fit in the buffer, which can itself lead to packet loss. Keep snaplen as small as possible, just large enough to hold the protocol information you need.)
3. Write the packets to a capture file instead of printing them to the terminal.
4. Increase the socket receive buffer (SO_RCVBUF) via sysctl, i.e. enlarge the buffer available to libpcap.
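Roughly what those four workarounds look like on the command line; the interface eth0, the port 80 filter, the 96-byte snaplen, the file name and the 16 MB buffer value are placeholders chosen for illustration, not values from the original test:

    # 1: skip reverse DNS lookups
    tcpdump -n -i eth0 port 80
    # 2: capture only the first 96 bytes of each packet
    tcpdump -n -s 96 -i eth0 port 80
    # 3: write packets straight to a capture file instead of printing them
    tcpdump -n -i eth0 -w capture.cap port 80
    # 4: raise the kernel's default/maximum socket receive buffer (affects SO_RCVBUF)
    sysctl -w net.core.rmem_default=16777216
    sysctl -w net.core.rmem_max=16777216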
Method 1 I have tried; the effect was not satisfactory.
Method 2 I have also tried, and it works well. But I needed the complete packets for my test, so truncating them was not an option. After thinking it over, I gave up on this plan.
Method 3: I was already writing the output to a file, but I still had packet loss, so it doesn't seem to help here.
Method 4 feels a bit involved, but as explained above the packet loss comes from an insufficient buffer, so this approach seemed promising, just troublesome. Then it occurred to me to check tcpdump itself, and I found that it has a -B option that changes the capture buffer size!
So the final solution: I enlarged tcpdump's capture buffer with the -B option!
Note that if the -B option is not specified, the buffer size defaults to 32768, so I simply tried doubling it: -B 65535.
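For reference, the final invocation looked roughly like this; the interface, filter and file name are placeholders, the -B 65535 is the only part that matters here:

    tcpdump -i eth0 -B 65535 -w capture.cap port 80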
Hee hee, and just like that, all the packet loss was gone ~
I use raw sockets. Like tcpdump, at around 0.7%-1.3% CPU with blocking reads, there is occasional packet loss. Setting net.core.rmem_max = 16777216 relieved it slightly, but packets are still lost. Pinning it to a single CPU with CPU affinity has not been tested. Non-blocking reads would take 100% of a CPU, also untested.