Using libpcap, is there a way to determine the file offset of a captured packet from an offline pcap file?
I'm writing a program to reconstruct TCP streams captured by Snort. Most of the examples I've read regarding session reconstruction either:
My current solution was to write my own pcap file parser, since the format is simple. I save the offset of each packet in a vector and can reload any packet after I've passed it. Like libpcap, this streams only one packet into memory at a time; I use only sequence numbers and flags for ordering, not the packet data. Unlike libpcap, though, it is noticeably slower: processing a 570 MB capture with libpcap takes roughly 0.9 seconds, whereas my code takes 3.2 seconds. However, I have the advantage of being able to seek backwards without reloading the entire capture.
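To illustrate the kind of scan such a hand-rolled parser performs, here is a minimal sketch. The function name scanPacketOffsets is mine, not from the post, and it assumes the classic (non-pcapng) savefile layout in native byte order: a 24-byte global header followed by 16-byte record headers.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Collect the file offset of each packet record in a classic pcap savefile.
// Assumes the 24-byte global header and 16-byte per-record headers of the
// original (non-pcapng) format, stored in the host's byte order.
std::vector<long> scanPacketOffsets(std::FILE *f) {
    std::vector<long> offsets;
    std::fseek(f, 24, SEEK_SET);              // skip the global header
    for (;;) {
        long pos = std::ftell(f);
        std::uint32_t recHdr[4];              // ts_sec, ts_usec, incl_len, orig_len
        if (std::fread(recHdr, sizeof recHdr, 1, f) != 1)
            break;                            // EOF (or truncated record header)
        offsets.push_back(pos);
        std::fseek(f, recHdr[2], SEEK_CUR);   // skip incl_len bytes of packet data
    }
    return offsets;
}
```

To revisit packet n later, fseek to offsets[n] and reread the 16-byte record header plus incl_len bytes of data, which is what makes backwards seeking cheap.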
If I were to stick with libpcap for speed, I was thinking I could just make a currentOffset variable with an initial value of 24 (the size of the pcap file global header), push it to a vector every time I load a new packet, and increment it by the size of the packet + 16 (the size of the pcap record header) every time I call pcap_next_ex. Then, whenever I wanted to read an individual packet, I could seek to packetOffsets[packetNumber] and load it by conventional means.
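That bookkeeping is just running arithmetic over the captured lengths; a minimal sketch (the function name recordOffsets is hypothetical):

```cpp
#include <cstdint>
#include <vector>

// Given the captured length (incl_len) of each packet, compute the file
// offset of each record: start after the 24-byte global header and, for
// every packet, add 16 bytes of record header plus the captured data.
std::vector<std::uint64_t> recordOffsets(const std::vector<std::uint32_t> &caplens) {
    std::vector<std::uint64_t> offsets;
    std::uint64_t currentOffset = 24;       // pcap global header
    for (std::uint32_t caplen : caplens) {
        offsets.push_back(currentOffset);   // offset of this record
        currentOffset += 16 + caplen;       // record header + packet data
    }
    return offsets;
}
```

Driven from a pcap_next_ex loop, the increment would come from header->caplen of the returned pcap_pkthdr.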
Is there a better way to do this using libpcap?
Solved the problem myself.
Before I call pcap_next_ex, I push ftell(pcap_file(myPcap)) into a vector<unsigned long>. I manually parse the packets after that as needed.
EZPZ. It just took 24+ hours of wracking my brain...