
How to ensure that AF_PACKET socket doesn't get packets from other interfaces between socket and bind?

Consider an application that opens an AF_PACKET socket to listen for packets on a specific interface.

The canonical way to do it is:

  1. Open a socket
  2. Bind it to the interface

The code may look like this (error checking omitted for brevity):

int sock_fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

struct sockaddr_ll sock_addr = {
    .sll_family = AF_PACKET,
    .sll_ifindex = if_nametoindex("ethXYZ"),
};
bind(sock_fd, (struct sockaddr *)&sock_addr, sizeof(sock_addr));

Now, since network traffic may be high-rate and/or the executing thread may be preempted between the socket and bind syscalls, the socket's receive buffer may already have received packets, possibly from other interfaces, before the bind takes effect.

This is as defined in packet(7):

By default, all packets of the specified protocol type are passed to a packet socket. To get packets only from a specific interface use bind(2) specifying an address in a struct sockaddr_ll to bind the packet socket to an interface. Fields used for binding are sll_family (should be AF_PACKET), sll_protocol, and sll_ifindex.

What is the proper way to ensure that the socket buffer is not populated with unwanted packets between socket and bind?

I've attempted two solutions so far:

  1. After calling bind, attach a temporary drop-all socket filter, flush the socket, then detach the filter
  2. Use the (undocumented?) protocol = 0 third argument in the socket() call so that packets are not delivered until bind is called (this time with a non-zero protocol).

The simplified code for the first solution is:

// fprog is assumed to hold a classic BPF program that drops every packet
setsockopt(sock_fd, SOL_SOCKET, SO_ATTACH_FILTER, &fprog, sizeof(fprog));

// Drain whatever was queued before the filter took effect
while (1) {
    bytes = recv(sock_fd, buffer, buffer_size, MSG_DONTWAIT);
    if (bytes == -1) // should check that errno is EAGAIN/EWOULDBLOCK
        break;
}

setsockopt(sock_fd, SOL_SOCKET, SO_DETACH_FILTER, &fprog, sizeof(fprog));
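
For reference, the drop-all fprog used above can be expressed as a single classic BPF instruction that returns 0 (accept zero bytes, i.e. drop the packet). A minimal sketch, with illustrative variable names:

#include <linux/filter.h>

/* One-instruction classic BPF program: RET 0 drops every packet. */
struct sock_filter drop_all_insns[] = {
    { BPF_RET | BPF_K, 0, 0, 0 },
};
struct sock_fprog fprog = {
    .len    = sizeof(drop_all_insns) / sizeof(drop_all_insns[0]),
    .filter = drop_all_insns,
};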

The second solution seems the most elegant, since bind can provide both the ethertype and the interface index at the same time, but it looks like undefined behavior. Looking into net/packet/af_packet.c, within packet_create(), proto is used for the protocol hook but is not validated beforehand (i.e. no error is returned):

if (proto) {
    po->prot_hook.type = proto;
    __register_prot_hook(sk);
} 

After looking into the kernel code in af_packet.c, it seems that creating the socket with protocol 0 should work fine.
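
Put together, the second approach then looks roughly like this (a minimal sketch; ethXYZ is just the placeholder interface name used above):

#include <arpa/inet.h>        /* htons() */
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <linux/if_packet.h>  /* struct sockaddr_ll */
#include <net/if.h>           /* if_nametoindex() */
#include <sys/socket.h>

/* proto = 0: no protocol hook is registered yet, so nothing can be queued. */
int sock_fd = socket(AF_PACKET, SOCK_RAW, 0);

struct sockaddr_ll sock_addr = {
    .sll_family   = AF_PACKET,
    .sll_protocol = htons(ETH_P_ALL),          /* hook registered on bind */
    .sll_ifindex  = if_nametoindex("ethXYZ"),
};
bind(sock_fd, (struct sockaddr *)&sock_addr, sizeof(sock_addr));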

While debugging, the packet_rcv() protocol handler is never entered between socket and bind, so such packets don't increase the socket buffer, meaning sk_rmem_alloc stays 0. After bind, new packets come in and fill the buffer as expected, allocating and filling sk_buffs:

[  548.455055] Called packet_rcv() pkt_type = 1
[  548.455058] packet_rcv() skb->len = 60
[  548.455059] packet_rcv() orig = 3, dev = 3
[  548.455060] packet_rcv() sk_rmem_alloc = 0

[  554.702140] Called packet_rcv() pkt_type = 1
[  554.702156] packet_rcv() skb->len = 60
[  554.702157] packet_rcv() orig = 3, dev = 3
[  554.702158] packet_rcv() sk_rmem_alloc = 768

The key point is that with proto = 0, packet_create() won't call dev_add_pack(), so the packet_rcv() protocol handler isn't added to the network stack until bind() is called.
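
As a user-space sanity check (my own assumption, not something the kernel traces above rely on), the PACKET_STATISTICS option documented in packet(7) can be read right after bind(): tp_packets is incremented in packet_rcv(), so it should still be 0 if nothing was queued during the race window. Note that packets arriving on the bound interface after bind() are counted too, and reading the statistics resets the counters:

#include <linux/if_packet.h>

struct tpacket_stats stats;
socklen_t optlen = sizeof(stats);
getsockopt(sock_fd, SOL_PACKET, PACKET_STATISTICS, &stats, &optlen);
/* stats.tp_packets == 0 here means packet_rcv() queued nothing before bind */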
