
Need an in-depth explanation of how to use flock in Linux shell scripting

I am working on a tiny Raspberry Pi cluster (4 Pis). I have 3 Raspberry Pi nodes that will each leave a message in a message.txt file on the head Pi. The head Pi will be in a loop checking the message.txt file to see if it has any lines. When it does, I want to lock the file and then extract the info I need. The problem I am having is that I need to run multiple commands while holding the lock. The only way I have found that allows multiple commands looks like this:

(
flock -s 200

# ... commands executed under lock ...

) 200>/var/lock/mylockfile 

The problem with this approach is that it uses a subshell. That matters because I have "job" files labeled job_1, job_2, etc. that I want to track with a counter. If I place the increment of the counter inside the subshell, the change is only visible within the subshell's scope. If I pull the increment out of the subshell, there is a chance that another Pi will add an entry before I increment the counter and lock the file.
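For example, here is the scope problem in isolation (a toy snippet, separate from my real script):

count=0
( count=$((count + 1)) )   # runs in a subshell; the parent shell never sees the change
echo "$count"              # prints 0, not 1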

I have heard there is a way to lock the file, run multiple commands with flow control, and then unlock it all using flock, but I have not seen any good examples.
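From what I have read, the pattern would look something like this (just a sketch I pieced together, untested; the fd number 200 and the lock path are placeholders):

exec 200>/var/lock/mylockfile   # open the lock file on fd 200 in the current shell
flock -x 200                    # block until an exclusive lock is granted

counter=$((counter + 1))        # no subshell, so this change persists
# ... more commands and flow control while holding the lock ...

flock -u 200                    # explicitly release the lock
exec 200>&-                     # close the descriptor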

Here is my current code.

# Now go into loop to send out jobs as Pis ask for more work
while [ "$jobsLeftCount" -gt 0 ]
do
    echo "launchJobs.sh: About to check msg file"

    msgLines=$(wc -l < "$msgLocation")
    if [ "$msgLines" -gt 0 ]; then
        #FIND WAY TO LOCK FILE AND DO THAT HERE
        echo "launchJobs.sh: Messages found. Locking message file to read contents"

        (
        flock -e 350
        echo "Message Received"

        while read -r line; do
            # rename file to be sent to node "job"
            mv "$jobLocation$jobName$jobsLeftCount" /home/pi/algo2/Jobs/job
            # transfer new job to each Pi that left a message
            scp /home/pi/algo2/Jobs/job "pi@192.168.0.$line:/home/pi/algo2/Jobs/"
            jobsLeftCount=$((jobsLeftCount - 1))   # lost when the subshell exits
            echo "$line"
        done < "$msgLocation"

        # clear msg file
        > "$msgLocation"
        #UNLOCK MESG FILE HERE
        ) 350>>"$msgLocation"

        echo "Head node has $jobsLeftCount remaining"
    fi
    #jobsLeftCount=$((jobsLeftCount - 1))
    #echo "here is $jobsLeftCount file"
done

If the sub-shell environment is not acceptable, use braces in place of parentheses to group the commands:

{
flock -s 200

# ... commands executed under lock ...

} 200>/var/lock/mylockfile

This runs the commands executed under lock in a new I/O context, but does not start a sub-shell. Within the braces, all the commands executed will have file descriptor 200 open to the locked lock file.
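Applied to the loop in the question, only the grouping changes (a sketch reusing the question's own variables); since no sub-shell is started, jobsLeftCount keeps its updated value after the group exits:

{
    flock -e 350
    echo "Message Received"

    while read -r line; do
        mv "$jobLocation$jobName$jobsLeftCount" /home/pi/algo2/Jobs/job
        scp /home/pi/algo2/Jobs/job "pi@192.168.0.$line:/home/pi/algo2/Jobs/"
        jobsLeftCount=$((jobsLeftCount - 1))   # persists: runs in the current shell
        echo "$line"
    done < "$msgLocation"

    # clear msg file
    > "$msgLocation"
} 350>>"$msgLocation"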
