
pipe() with fork() and recursion: file descriptor handling

I am confused about an existing question that was asked yesterday:
Recursive piping in Unix again.

I am re-posting the problematic code:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <stdlib.h>

void pipeline( char * ar[], int pos, int in_fd);
void error_exit(const char*);
static int child = 0; /* whether it is a child process relative to main() */

int main(int argc, char * argv[]) {
    if(argc < 2){
        printf("Usage: %s option (option) ...\n", argv[0]);
        exit(1);
    }
    pipeline(argv, 1, STDIN_FILENO);
    return 0;
}

void error_exit(const char *kom){
    perror(kom);
    (child ? _exit : exit)(EXIT_FAILURE);
}

void pipeline(char *ar[], int pos, int in_fd){
    if(ar[pos+1] == NULL){ /*last command */
        if(in_fd != STDIN_FILENO){
            if(dup2(in_fd, STDIN_FILENO) != -1)
                close(in_fd); /*successfully redirected*/
            else error_exit("dup2");
        }
        execlp(ar[pos], ar[pos], NULL);
        error_exit("execlp last");
    }
    else{
        int fd[2];
        pid_t childpid;

        if ((pipe(fd) == -1) || ((childpid = fork()) == -1)) {
            error_exit("Failed to setup pipeline");
        }
        if (childpid == 0){ /* child executes current command */
            child = 1;
            close(fd[0]);
            if (dup2(in_fd, STDIN_FILENO) == -1) /*read from in_fd */
                perror("Failed to redirect stdin");
            if (dup2(fd[1], STDOUT_FILENO) == -1)   /*write to fd[1]*/
                perror("Failed to redirect stdout");
            else if ((close(fd[1]) == -1) || (close(in_fd) == - 1))
                perror("Failed to close extra pipe descriptors");
            else {
                execlp(ar[pos], ar[pos], NULL);
                error_exit("Failed to execlp");
            }
        }
        close(fd[1]);   /* parent executes the rest of commands */
        close(in_fd);
        pipeline(ar, pos+1, fd[0]);
    }
}

The error that was occurring was:

Example: 
./prog ls uniq sort head 

gives: 
sort: stat failed: -: Bad file descriptor

The solution that was suggested was: "don't close the file descriptors fd[1] and in_fd in the child process, since they are already being closed in the parent process."

My Confusion: (sorry, I am a newbie in Linux)
According to my book "Beginning Linux Programming", when we fork() a process, the file descriptors are also duplicated, so the parent and child should have different file descriptors. This contradicts the answer.

My Attempt:
I ran this code myself and saw that the problem appears only if I close the in_fd file descriptor in both processes (parent and child); it does not depend on fd[1].
Also, interestingly, ./prog ls sort head works fine, but ./prog ls sort head uniq gives a read error on head.

My Thoughts: The in_fd file descriptor is just an int parameter of this function. It seems that even after fork() there is effectively only one file that both parent and child are sharing, but I am not able to understand how.

"when we fork() a process, then the file descriptors are also duplicated. Hence the parent and child should have different file descriptors"

A file descriptor is just an integer, so when it is copied by fork() it keeps the same value, and both copies refer to the same open file.

So you can open a file in the parent and access it from the child. The only problem arises if parent and child both access the file: since they share one file offset, it is not guaranteed from which position each read or write will happen. To avoid this, the usual recommendation is for one process to close its copy of the descriptor.

Following your attempt, I tried the same and found that it always goes wrong once there is a fourth command. Also, note what dup2() actually does: dup2(oldfd, newfd) closes newfd (if it is open) and then makes newfd refer to the same open file description as oldfd; oldfd itself stays open. In the code, in_fd and fd[1] are duplicated onto the child's stdin and stdout, and the child then closes both originals. That is fine in general, but in the first stage in_fd is STDIN_FILENO itself, so dup2(in_fd, STDIN_FILENO) is a no-op and the following close(in_fd) closes the child's standard input. Worse, the parent also closes in_fd (descriptor 0), so the next pipe() call reuses descriptor 0, and a later stage ends up closing its own stdin just before exec, which is exactly where sort fails.

Within a single process, closing an already closed descriptor simply fails with EBADF; it does not affect any other descriptor.

Also keep in mind that each process has its own descriptor table: close() in the child only removes the child's entry, and the parent's copy of the descriptor stays open (and vice versa). The underlying open file description is released only when the last descriptor referring to it, in any process, is closed, so closing the same descriptor number once in each process is normal and safe regardless of which process runs first.
