
How do I create a file descriptor in Linux that can be read from multiple processes without consuming the data?

I'd like to create a file descriptor that, when written to, can be read from multiple processes without consuming the data. I'm aware of named pipes, but since it's a FIFO, only one process can ever get the data.

My use case is the following. With git, hooks receive the data they are supposed to process on stdin. I want to be able to call multiple sub-hooks from a parent hook, and each sub-hook should get the same stdin data as the parent receives. If I'm not mistaken, when I use a pipe, each subprocess will not get the same stdin; instead, the first hook to read stdin would consume the data. Is this correct?

The only option I really see as viable at this point is writing stdin to a file and then reading that file from each subprocess. Is there another way?
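To be concrete, this is roughly what I have in mind for the file-based approach (the sub-hook paths are just placeholders):

#!/bin/bash
# Parent hook sketch: capture stdin once, then replay it for each sub-hook.
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

cat > "$tmpfile"                        # save everything the parent hook received

for hook in ./sub-hook-1 ./sub-hook-2   # placeholder sub-hook paths
do
    "$hook" < "$tmpfile"                # each sub-hook gets the full, unconsumed data
done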

Perhaps you could try to use tee.

From man tee:

tee - read from standard input and write to standard output and files

In your case you can combine it with process substitution, along these lines: tee >(parent) >(hook1) >(hook2) >(hookn)

(where each hook is a different process, command, shell script, whatever you want)

Here is an example:

#!/bin/bash

# Read stdin line by line and hand each line to every command
# listed in the process substitutions.
while IFS= read -r stdinstream
do
    printf '%s\n' "${stdinstream}" | tee >(parent) >(hook2) >(hook1)
done
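For example, assuming the script above is saved as a hypothetical fanout.sh and parent, hook1 and hook2 are executables on your PATH, you could feed it like this:

printf 'line 1\nline 2\n' | ./fanout.sh

tee prints each line once on its own stdout and passes a copy to parent, hook1 and hook2.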

EDIT:

In your case I do not think you are going to need the while loop; this could be enough:

IFS= read -r stdinstream
printf '%s\n' "${stdinstream}" | tee >(parent) >(hook2) >(hook1)
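As a variation: if your hook is one that receives several lines on stdin (pre-receive and post-receive get one line per updated ref), you can also skip read entirely and let tee forward the whole stream. This is only a sketch, and the hook names are placeholders:

#!/bin/bash
# Forward the parent hook's entire stdin to each sub-hook.
# The final > /dev/null discards tee's own copy on stdout; drop it if you
# also want the data echoed to the parent's stdout.
tee >(hook1) >(hook2) >(hook3) > /dev/null

Keep in mind that process substitutions run asynchronously, so the parent hook may return before the sub-hooks have finished.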

Hopefully this will help you.
