
File locking seems not to work (flock/lockf)

In my project we have some scripts that start the application, run some performance tests, and then kill the application. The problem is that sometimes something bad happens to the script, like a crash, and then our application is left hanging "in the air".

I wanted to fix that by writing the pid(s) of the application to a file, and to do it properly (I think) I wanted to do something like this:

lock the file
process the pid/pids
clean file entries
unlock the file
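The four steps above could be sketched like this (a minimal sketch; the pid-file path is a made-up example, the kill/inspect step is left as a placeholder, and `flock` here gives only an advisory lock):

```python
import fcntl

PIDFILE = "/tmp/myapp.pids"  # hypothetical path; adjust for your setup

def process_pidfile(pidfile=PIDFILE):
    """Lock the pid file, read the recorded pids, then clean the entries."""
    # "a+" opens for read/write and creates the file if it does not exist.
    with open(pidfile, "a+") as f:
        # 1. lock the file: block until we hold an exclusive advisory lock
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            # 2. process the pid/pids
            f.seek(0)
            pids = [int(line) for line in f if line.strip()]
            # ... kill/inspect each pid here ...

            # 3. clean file entries
            f.seek(0)
            f.truncate()
        finally:
            # 4. unlock the file (closing the file would release it too)
            fcntl.flock(f, fcntl.LOCK_UN)
        return pids
```

Keeping the lock/unlock inside `try`/`finally` (or releasing via `close()`) ensures the lock is not held forever if the processing step raises.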

I then looked up how to lock files in Python 2.7 (the version we use for our scripts) and found https://docs.python.org/2/library/fcntl.html with its flock and lockf functions, but I think I am doing something wrong.

I wanted to test if those methods work properly so I did:

echo "test" > testFile
(open repl)
>>> import fcntl
>>> f = open("testFile", "r+")
>>> fcntl.flock(f, fcntl.LOCK_EX)

and even though I had locked the file (or at least I think I did), I could still run

echo "aaa" >> testFile

in another terminal session, and it succeeded: the file was changed, with no errors.

If there is an OS-specific trick I should use (though I doubt the Python standard library can't handle locking in a portable way), this needs to work on Linux.

By default, file locks are advisory, meaning they only work when all processes cooperate, i.e., each process checks whether the file is locked before attempting I/O. Nothing stops a process from ignoring an advisory lock and simply writing to the file, which is exactly what your `echo` in the other terminal did.
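This cooperation can be demonstrated in a single script (a sketch assuming Linux; per the flock(2) man page, two independent `open()` calls on the same file get separate open file descriptions, so `flock` treats them like two different processes):

```python
import errno
import fcntl
import tempfile

# Create a scratch file to lock.
path = tempfile.NamedTemporaryFile(delete=False).name

holder = open(path, "r")
fcntl.flock(holder, fcntl.LOCK_EX)      # first "process" takes the lock

other = open(path, "r")
try:
    # A cooperating process asks for the lock (non-blocking) before any I/O...
    fcntl.flock(other, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked_out = False
except OSError as exc:
    # ...and backs off when the lock is already held by someone else.
    locked_out = exc.errno in (errno.EAGAIN, errno.EACCES)

holder.close()   # closing the file releases its flock
other.close()
```

Here `locked_out` ends up true: the second acquisition attempt fails because the lock is held. The `echo` in your test never made such an attempt, so the kernel let the write through.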

There are also mandatory locks, with which the kernel forces other processes to respect the locks. This is probably what you want; search for "mandatory locks linux" for the details, which mostly involve mounting the file system in question with particular options.
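For reference, the setup looks roughly like this (a sketch only; the mount point and file are example paths, the commands need root, and, as far as I know, mandatory file locking was removed from the Linux kernel in 5.15, so this applies only to older kernels):

```shell
# 1. Remount the filesystem with the "mand" option:
mount -o remount,mand /mnt/data

# 2. Mark the file as a mandatory-lock candidate: setgid bit on,
#    group-execute bit off (the Linux convention for this).
chmod g+s,g-x /mnt/data/testFile

# 3. fcntl()/lockf() locks on the file are now enforced by the kernel:
#    a write() from a non-cooperating process blocks or fails.
#    Note: flock() locks never become mandatory; only fcntl/lockf do.
```

Note the last point in particular: if you go the mandatory-lock route, you must use `fcntl.lockf` (the `fcntl` record-locking interface) rather than `fcntl.flock`.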
