Huge packet loss when using PACKET_RX_RING of PACKET_MMAP
While capturing Ethernet packets using PACKET_MMAP (PACKET_RX_RING), I see more than 50% packet loss at data rates of 100 KB/s and higher. Is this level of loss common with this kind of technology?
Is there any room for improvement in the code, parameters, or logic below to reduce the packet loss when using PACKET_MMAP with PACKET_RX_RING?
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <net/if.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <arpa/inet.h>
#include <poll.h>
#include <signal.h>

static FILE* pcapfile;  /* capture output file */

void handle_frame(struct tpacket_hdr* tphdr, struct sockaddr_ll* addr,
                  char* l2content, char* l3content) {
    if (tphdr->tp_status & TP_STATUS_USER) {
        fwrite(l2content, tphdr->tp_snaplen, 1, pcapfile);
        tphdr->tp_status = TP_STATUS_KERNEL;  /* hand the slot back to the kernel */
    }
}

int main() {
    pcapfile = fopen("file1.cap", "a+");
    if (pcapfile == NULL) {
        perror("fopen");
        exit(1);
    }
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd == -1) {
        perror("socket");
        exit(1);
    }
    /* Ring geometry: one slot per 1500-byte frame plus header; block size
       is the page size doubled until a slot fits. */
    struct tpacket_req req = {0};
    req.tp_frame_size = TPACKET_ALIGN(TPACKET_HDRLEN + ETH_HLEN) + TPACKET_ALIGN(1500);
    req.tp_block_size = sysconf(_SC_PAGESIZE);
    while (req.tp_block_size < req.tp_frame_size) {
        req.tp_block_size <<= 1;
    }
    req.tp_block_nr = 4;
    size_t frames_per_buffer = req.tp_block_size / req.tp_frame_size;
    req.tp_frame_nr = req.tp_block_nr * frames_per_buffer;

    int version = TPACKET_V1;
    if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version)) == -1) {
        perror("setsockopt(PACKET_VERSION)");
        exit(1);
    }
    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, (void*)&req, sizeof(req)) == -1) {
        perror("setsockopt(PACKET_RX_RING)");
        exit(1);
    }
    size_t rx_ring_size = req.tp_block_nr * req.tp_block_size;
    char* rx_ring = mmap(0, rx_ring_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (rx_ring == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }

    struct pollfd fds[1] = {0};
    fds[0].fd = fd;
    fds[0].events = POLLIN;

    size_t frame_idx = 0;
    char* frame_ptr = rx_ring;
    while (1) {
        struct tpacket_hdr* tphdr = (struct tpacket_hdr*)frame_ptr;
        /* Sleep in poll() until the kernel marks this slot ready. */
        while (!(tphdr->tp_status & TP_STATUS_USER)) {
            if (poll(fds, 1, -1) == -1) {
                perror("poll");
                exit(1);
            }
        }
        struct sockaddr_ll* addr = (struct sockaddr_ll*)(frame_ptr + TPACKET_HDRLEN - sizeof(struct sockaddr_ll));
        char* l2content = frame_ptr + tphdr->tp_mac;
        char* l3content = frame_ptr + tphdr->tp_net;
        handle_frame(tphdr, addr, l2content, l3content);
        /* Advance to the next slot; blocks are physically separate, so
           recompute the block base instead of just adding tp_frame_size. */
        frame_idx = (frame_idx + 1) % req.tp_frame_nr;
        int buffer_idx = frame_idx / frames_per_buffer;
        char* buffer_ptr = rx_ring + buffer_idx * req.tp_block_size;
        int frame_idx_diff = frame_idx % frames_per_buffer;
        frame_ptr = buffer_ptr + frame_idx_diff * req.tp_frame_size;
    }
    fflush(pcapfile);
    fclose(pcapfile);
}
msgboardpana,
Check your RX ring settings:
tp_block_size = page size (it equals only a few Ethernet frames at a standard MTU; for me, with a 2 KB page, it is a single frame)
tp_block_nr = 4 (four blocks; pay attention that they are physically non-contiguous!)
I think your ring buffer is simply overflowing. I really recommend increasing tp_block_size (I use the following; my page size is 2 KB):
tp.tp_block_size = BLOCK_SIZE;  /* (PAGE_2K * PAGE_2K) */
tp.tp_block_nr   = BLOCK_NR;    /* BLOCK_NR (1) */
tp.tp_frame_size = PAGE_2K;     /* max size of an Ethernet frame is 1522 bytes */
tp.tp_frame_nr   = (tp.tp_block_size * tp.tp_block_nr) / tp.tp_frame_size;
And decrease the number of blocks.
Additionally, try to reduce the syscalls in your loop: I would recommend writing to the file in a separate thread, because the write is a really heavy syscall (check the timings just in case). I also advise enabling promiscuous mode on the interface by adding this to your init code:
struct packet_mreq mreq = {0};
mreq.mr_ifindex = if_idx.ifr_ifindex;  /* interface index obtained earlier */
mreq.mr_type = PACKET_MR_PROMISC;
if (setsockopt(sockfd, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) == -1)
{
    perror("setsockopt");
    goto closefd;
}
And if you do decide to split file writing and capturing into separate threads, use the SCHED_FIFO policy for the capture thread:
ret = pthread_attr_setschedpolicy(&tattr, SCHED_FIFO);
Regards, Bulat