
Sending data chunks over named pipe in linux

I want to send data chunks over a named pipe and want the receiver to know where each chunk ends. How should I do this with named pipes? Should I use some format for joining and splitting chunks (always treating the pipe as a stream of bytes), or is there some other method?

I've tried opening and closing the pipe at the sender for every data chunk, but the data arrives concatenated at the receiver side (no EOF is delivered between chunks):

for _ in range(2):
    with open('myfifo', 'bw') as f:
        f.write(b'+')

Result:

rsk@fe temp $ cat myfifo 
++rsk@fe temp $

You can either use some sort of delimiter or frame structure over your pipes, or (preferably) use multiprocessing.Pipe-like objects and run pickled Python objects through them.

The first option simply means defining a small protocol that you run through your pipe. Add a header to each chunk of data you send so that the receiver knows what to do with it. For instance, use a length-value scheme:

import struct

def send_data(file_descriptor, data):
    # 4-byte big-endian length prefix followed by the payload
    length = struct.pack('>L', len(data))
    packet = length + data
    file_descriptor.write(packet)

def read_data(file_descriptor):
    binary_length = file_descriptor.read(4)
    length = struct.unpack('>L', binary_length)[0]

    data = b''
    while len(data) < length:
        data += file_descriptor.read(length - len(data))
    return data

As for the other option - you can try reading the code of the multiprocessing module, but essentially, you just run the result of cPickle.dumps through the pipe and then feed it to cPickle.loads to get Python objects back.
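A minimal sketch of that idea using the standard pickle module (Python 3's successor to cPickle), combined with the length prefix from above; the helper names send_obj/recv_obj are my own:

```python
import os
import pickle
import struct

def send_obj(writer, obj):
    """Pickle obj and write it with a 4-byte big-endian length prefix."""
    payload = pickle.dumps(obj)
    writer.write(struct.pack('>L', len(payload)) + payload)
    writer.flush()

def recv_obj(reader):
    """Read one length-prefixed pickle and rebuild the object."""
    header = reader.read(4)
    if len(header) < 4:
        raise EOFError('writer closed the pipe')
    (length,) = struct.unpack('>L', header)
    payload = b''
    while len(payload) < length:
        chunk = reader.read(length - len(payload))
        if not chunk:
            raise EOFError('pipe closed mid-message')
        payload += chunk
    return pickle.loads(payload)

# Demo over an anonymous pipe; a named pipe opened with open(path, 'wb')
# on one side and open(path, 'rb') on the other works the same way.
r, w = os.pipe()
writer = os.fdopen(w, 'wb')
reader = os.fdopen(r, 'rb')
send_obj(writer, {'block': 1})
send_obj(writer, [1, 2, 3])
first = recv_obj(reader)
second = recv_obj(reader)
```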

I would just use lines of JSON-encoded data. These are easy to debug and the performance is reasonable.

For an example of reading and writing lines: http://www.tutorialspoint.com/python/file_writelines.htm

For an example of using ujson (UltraJSON): https://pypi.python.org/pypi/ujson
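A sketch of the newline-delimited approach with the standard-library json module (ujson exposes the same dumps/loads interface); the helper names are illustrative:

```python
import io
import json

def send_message(pipe, obj):
    # One JSON document per line; '\n' marks the end of a message.
    pipe.write(json.dumps(obj) + '\n')
    pipe.flush()

def read_messages(pipe):
    # Yield one decoded object per line until the writer closes its end.
    for line in pipe:
        yield json.loads(line)

# io.StringIO stands in for the pipe's file object in this demo.
buf = io.StringIO()
send_message(buf, {'chunk': 1})
send_message(buf, [2, 3])
buf.seek(0)
messages = list(read_messages(buf))
```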

In addition to the other solutions, you don't need to stick with named pipes. Named sockets are no worse and provide handier features. With AF_LOCAL and SOCK_SEQPACKET, message boundaries are maintained by the kernel, so whatever is written by a single send() will be received on the opposite side by a single recv().
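A sketch of the kernel-maintained boundaries, using socketpair() to keep the example self-contained; with a named socket you would instead bind()/connect() an AF_UNIX (AF_LOCAL) address on the filesystem:

```python
import socket

# SOCK_SEQPACKET on a Unix-domain socket: reliable, ordered, and
# message-boundary preserving (Linux).
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)

sender.send(b'first block')
sender.send(b'second block')

# Each recv() returns exactly one send()'s worth of data, regardless of
# the buffer size passed -- no delimiter or length prefix needed.
first = receiver.recv(4096)
second = receiver.recv(4096)
```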
