
Linux kernel module: Socket buffer (sk_buff->len) non-deterministic behaviour

I have a kernel module that reads packets from a netfilter hook and uses sk_buff to access the data.

What I am observing is that when packets arrive at a slow rate, sk_buff->len behaves normally, but when packets arrive at a higher rate (1 Gbps etc.), sk_buff->len for a few packets starts to increase (always by a multiple of 8).

The data I am replaying contains fragmented packets as well. Is it that fragmented packets get appended to the same sk_buff, causing the increase in sk_buff->len? If yes, how is sk_buff aware of the stack, and at what point?

Can someone explain why this happens and how to get around it? Any reference to documentation would be helpful as well.
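For context, the setup described above can be sketched as a minimal pre-routing hook that logs the length of each buffer it sees. The hook point, priority, and all names here are assumptions for illustration; the question does not specify them (kernel module, not runnable in userspace):

```c
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>
#include <net/net_namespace.h>

/* Log the length of every IPv4 packet seen at PRE_ROUTING.
 * Names and hook point are illustrative assumptions. */
static unsigned int len_log_hook(void *priv, struct sk_buff *skb,
                                 const struct nf_hook_state *state)
{
	pr_info("skb->len = %u\n", skb->len);
	return NF_ACCEPT; /* pass the packet on unmodified */
}

static struct nf_hook_ops len_log_ops = {
	.hook     = len_log_hook,
	.pf       = NFPROTO_IPV4,
	.hooknum  = NF_INET_PRE_ROUTING,
	.priority = NF_IP_PRI_FIRST,
};

static int __init len_log_init(void)
{
	return nf_register_net_hook(&init_net, &len_log_ops);
}

static void __exit len_log_exit(void)
{
	nf_unregister_net_hook(&init_net, &len_log_ops);
}

module_init(len_log_init);
module_exit(len_log_exit);
MODULE_LICENSE("GPL");
```

With a hook like this, the logged lengths are what jump by multiples of 8 under high packet rates.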

The reason for this behaviour is GRO (Generic Receive Offload). It is an optimization on the receive side, just as TSO and GSO are on the transmit side. It appends packets with the same TCP and IP headers into one big skb buffer to make kernel processing cheaper.
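Inside the hook, an skb that GRO has aggregated can be recognized with `skb_is_gso()`; `gso_size` and `gso_segs` in the shared info describe the original on-wire segments. A hedged fragment of what that check could look like (kernel code, illustrative only):

```c
/* Detect a buffer that GRO merged from several wire packets.
 * gso_size: size of the original segments; gso_segs: how many were merged. */
if (skb_is_gso(skb)) {
	pr_info("GRO-merged skb: len=%u gso_size=%u gso_segs=%u\n",
	        skb->len, skb_shinfo(skb)->gso_size,
	        skb_shinfo(skb)->gso_segs);
}
```

Alternatively, GRO can be disabled per interface from userspace with `ethtool -K eth0 gro off` (the interface name is an example), after which the hook sees each packet individually, at the cost of higher per-packet processing overhead.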

