
First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.

script1.sh:

rm -f output.txt
echo "some text here" > output.txt
source script2.sh

script2.sh:

./read_time >> output.txt
./run_program
./read_time >> output.txt

Variations on these three lines are repeated throughout script2.sh.

This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source, the problem still occurs. The problem is not reproducible, so even when I change something I don't know whether it's actually fixed.

What could be causing this?

Edit: The scripts are very simple. script1.sh is exactly as you see here, but with different file names. script2.sh is what I posted, but with the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file and for >, but it doesn't show up anywhere unexpected.

The way these scripts are used is that script1.sh is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why; I didn't write all of this. Also, I've checked the code running on the host. The only mentions of the output file are when it is deleted using ssh and when it is copied back to the host after script1.sh is done.

Edit 2: I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this, I hit the problem between 1 and 4 times in every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved, although I'm still not sure why it would be a problem: I thought that the ssh command would not exit before all the remove commands had executed.
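One plausible explanation, assuming the host issued the remote delete asynchronously (the hostname, the backgrounding, and the exact commands below are guesses, not your actual setup): if the remote rm is still pending when script1.sh starts, it can delete output.txt after the echo has written the first line, and the later >> redirections then silently recreate the file without it.

ssh fpga 'rm -f output.txt' &   # '&' (or ssh -f) returns before rm runs remotely
ssh fpga './script1.sh'         # echo writes "some text here" first...
                                # ...then the delayed rm deletes the file, and the
                                # subsequent >> appends recreate it without that line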

It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.

To verify this, set the noclobber option with set -o noclobber. The shell will then refuse to overwrite an existing file with > and will report an error instead.
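A minimal demonstration of the noclobber behavior:

rm -f output.txt              # start from a clean slate
set -o noclobber
echo "first" > output.txt     # succeeds: the file does not exist yet
echo "second" > output.txt    # fails: "cannot overwrite existing file"
echo "third" >| output.txt    # >| explicitly bypasses noclobber
echo "fourth" >> output.txt   # >> is unaffected and still appends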

Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it; look for commands using this file without >>. Or it is used by some command as both input and output, and the two ends step on each other; look for the file used with < (see the sketch below).
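The classic self-clobbering pattern looks like this (sort is just a stand-in for any filter):

sort output.txt > output.txt    # WRONG: the shell truncates the file
                                # before sort ever reads it
sort output.txt > output.tmp && mv output.tmp output.txt    # safe alternative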

Lastly, you could have a race condition with a command writing to the file in the background, started before that echo (illustrated below).
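For illustration, a stray background job could truncate the file after the echo has already run; slow_setup is an invented name:

( sleep 1; ./slow_setup > output.txt ) &   # truncation happens ~1s later
echo "some text here" > output.txt         # written first...
                                           # ...then wiped when the background
                                           # job's > truncates the file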

Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
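For example (the patterns are approximate and may miss a redirection at the very start of a line):

grep -rn 'output.txt' .              # every reference to the file
grep -rn '[^>]> *output\.txt' .      # single > redirections that would truncate it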

It looks like something in one of the script2.sh scripts must be overwriting, truncating, or doing a substitution on output.txt.

For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely obtains; a sketch follows. Just a guess, but it would explain why you don't always see it.
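A hypothetical shape for such a bug (rare_condition and cleanup_tool are invented names):

if [ "$rare_condition" = "yes" ]; then   # only taken once in a while,
    ./cleanup_tool > output.txt          # and > truncates the file,
fi                                       # discarding the first line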

This is an interesting problem. Please post the solution when you find it!
