
A third computer captures, modifies and injects packets using libpcap before the destination computer receives packets from the source computer

I'm a newbie with libpcap. I am writing a C program that captures, modifies, and injects packets. I have three computers: A, B, and C. A sends ENIP packets to B at a 10 ms interval. C captures the packets sent by A and modifies them by: 1) incrementing the seq field by 1; 2) changing the payload. For example, A sends a packet with seq = 1; C captures that packet, changes its seq field to 2, changes the payload, and injects it onto the network. I want computer B to receive this packet from C before it receives the packet with seq = 2 from A.

My C program uses pcap_loop to capture packets and pcap_inject to inject them. This processing takes only a few microseconds. However, B does not receive the packets sent by C before receiving the packets sent by A. What I observed on computer B is that B receives several packets from A (e.g., seq = 1, 2, 3, ..., 30), then several packets from C (seq = 2, 3, 4, ..., 31), then packets from A (seq = 31, 32, ..., 90), then packets from C (seq = 32, ..., 91), and so on.
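For reference, the general shape of such a capture-modify-inject callback is sketched below. This is illustrative, not my exact code; the modification step is a hypothetical placeholder, and real code would also need to parse the headers and fix any checksums:

```c
#include <pcap/pcap.h>
#include <stdio.h>
#include <string.h>

/* pcap_loop() callback: copy the captured packet, modify it, re-inject it.
 * The same pcap_t is passed through the user pointer and reused for
 * injection here; a separate injection handle would work as well. */
static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    pcap_t *out = (pcap_t *)user;
    u_char copy[65535];

    if (h->caplen > sizeof copy || h->caplen < h->len)
        return;                          /* skip truncated captures */
    memcpy(copy, bytes, h->caplen);

    /* ... increment the seq field and rewrite the payload here
       (hypothetical placeholder for the actual modification) ... */

    if (pcap_inject(out, copy, h->caplen) == -1)
        fprintf(stderr, "pcap_inject: %s\n", pcap_geterr(out));
}
```

Driving it with `pcap_loop(p, -1, handler, (u_char *)p);` processes packets until an error or `pcap_breakloop()`.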

If I change A's sending interval to 1 second, the problem disappears.

I am thinking maybe there is some buffering delay in pcap_loop? Maybe pcap_loop collects packets for 0.5 seconds and then my program injects them onto the network in a batch? I am not sure...

On several OSes, the packet capture mechanism used by libpcap does "batching". Instead of delivering each packet as it arrives, it collects packets until a packet buffer in the kernel fills up or a timer expires, at which point it delivers the entire bufferful of packets, so that there are fewer context switches and fewer system calls.

This means that there will be a delay between the arrival of the packet and its delivery to pcap_loop(); this is OK for packet capture (especially high-volume packet capture, where the batching can reduce the CPU overhead of capture and thus the chances that packets will be dropped), but not OK for "real-time" applications, where the application wants to see the packet as soon as it arrives.

On newer versions of libpcap, there's a pcap_set_immediate_mode() call; if you use pcap_create() and pcap_activate() instead of pcap_open_live(), and set "immediate mode" by calling pcap_set_immediate_mode() on the pcap_t with a second argument of 1, between the pcap_create() and pcap_activate() calls, no buffering will be done; packets will be delivered as soon as they arrive.
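In code, that sequence looks roughly like this (a sketch assuming libpcap 1.5.0 or later, where pcap_set_immediate_mode() was introduced; the open_immediate() wrapper name is just for illustration):

```c
#include <pcap/pcap.h>
#include <stdio.h>

/* Open a capture handle with immediate mode enabled, so each packet is
 * delivered to the callback as soon as it arrives. */
pcap_t *open_immediate(const char *dev)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create(dev, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return NULL;
    }
    pcap_set_snaplen(p, 65535);
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 1000);      /* irrelevant once immediate mode is on */
    pcap_set_immediate_mode(p, 1);  /* must be set before pcap_activate() */
    if (pcap_activate(p) < 0) {     /* negative return means failure */
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        pcap_close(p);
        return NULL;
    }
    return p;
}
```

All of the pcap_set_*() options must be applied between pcap_create() and pcap_activate(); they fail once the handle is activated.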

On older versions of libpcap, you could either set the read timeout to a very small value (e.g., 1 millisecond) or, on some platforms, set "immediate mode" in a platform-dependent fashion. (We'd need to know the OS you're using, and the version of libpcap you're using, in order to indicate whether that can be done and how to do it.)
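A sketch of the small-timeout approach, for older libpcap without pcap_set_immediate_mode() (the open_low_latency() name is just for illustration; note this only bounds the batching delay to roughly a millisecond rather than eliminating it, and the exact timeout behavior is platform-dependent):

```c
#include <pcap/pcap.h>
#include <stdio.h>

/* Open a capture handle with a 1 ms read timeout, so batched packets are
 * flushed to the application at least once per millisecond. */
pcap_t *open_low_latency(const char *dev)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live(dev, 65535, 1, 1 /* to_ms */, errbuf);
    if (p == NULL)
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
    return p;
}
```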
