On a Debian-based OS (Ubuntu, Debian Squeeze), I'm using Python (2.7, 3.2) and fcntl to lock a file. From what I've read, fcntl.flock locks a file in such a way that an exception will be thrown if another client tries to lock the same file.
I built a small example, which I would expect to throw an exception, since I first lock the file and then, immediately afterwards, try to lock it again:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import fcntl
fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX)
try:
    fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("can't immediately write-lock the file ($!), blocking ...")
else:
    print("No error")
But the example just prints "No error".
If I split this code up to two clients running at the same time (one locking and then waiting, the other trying to lock after the first lock is already active), I get the same behavior - no effect at all.
What's the explanation for this behavior?
EDIT :
Changes as requested by nightcracker, this version also prints "No error", although I would not expect that:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import fcntl
import time
fcntl.flock(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
try:
    fcntl.flock(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("can't immediately write-lock the file ($!), blocking ...")
else:
    print("No error")
Old post, but if anyone else finds it, I get this behaviour:
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX)
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
# That didn't throw an exception
>>> f = open('test.flock', 'w')
>>> fcntl.flock(f, fcntl.LOCK_EX)
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 35] Resource temporarily unavailable
>>> f.close()
>>> fcntl.flock(open('test.flock', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
# No exception
It looks like in the first case, the file is closed right after the first line, presumably because the file object has no remaining references and is garbage collected. Closing the file releases the lock.
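The behaviour above can be condensed into a minimal sketch (assuming a writable /tmp/locktest, as in the question):

```python
import fcntl

# Keeping a reference holds the descriptor open, so the flock stays in place.
f = open('/tmp/locktest', 'w')
fcntl.flock(f, fcntl.LOCK_EX)

try:
    # A second descriptor on the same file cannot take the lock.
    fcntl.flock(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked_twice = True
except IOError:
    locked_twice = False

f.close()  # closing the file releases the lock
print('locked twice: %s' % locked_twice)
```

Had the first open() not been assigned to f, it would have been garbage collected immediately and the second flock would have succeeded.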
I had the same problem... I solved it by holding the opened file in a separate variable:
Won't work:
fcntl.lockf(open('/tmp/locktest', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)
Works:
lockfile = open('/tmp/locktest', 'w')
fcntl.lockf(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
I think the first doesn't work because the opened file object is garbage collected, closed, and the lock released.
Got it. The error in my script is that I create a new file descriptor on each call:
fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX | fcntl.LOCK_NB)
(...)
fcntl.flock(open('/tmp/locktest', 'r'), fcntl.LOCK_EX | fcntl.LOCK_NB)
Instead, I have to assign the file object to a variable and then try to lock:
f = open('/tmp/locktest', 'r')
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
(...)
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
Then I also get the exception I wanted to see: IOError: [Errno 11] Resource temporarily unavailable
. Now I have to think about the cases in which it makes sense at all to use fcntl.
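One common case where the pattern pays off (a sketch, with a made-up lock path): guarding against a second instance of a script. The file object must stay referenced for the lifetime of the process, otherwise garbage collection closes it and releases the lock early.

```python
import fcntl
import sys

# Hypothetical single-instance guard; /tmp/myscript.lock is an example path.
lockfile = open('/tmp/myscript.lock', 'w')
try:
    fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit('another instance is already running')
print('lock acquired, this is the only running instance')
```

A second copy of the script started while the first is still running would hit the IOError branch and exit.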
There are two catches. According to the documentation:

When operation is LOCK_SH or LOCK_EX, it can also be bitwise ORed with LOCK_NB to avoid blocking on lock acquisition. If LOCK_NB is used and the lock cannot be acquired, an IOError will be raised and the exception will have an errno attribute set to EACCES or EAGAIN (depending on the operating system; for portability, check for both values).
You forgot to set LOCK_NB.
On at least some systems, LOCK_EX can only be used if the file descriptor refers to a file opened for writing. You have a file opened for reading, which might not support LOCK_EX on your system.
Try:
global f
f = open('/tmp/locktest', 'r')
When the file is closed the lock will vanish.
You could refer to this post for more details on different locking schemes.
As for your second question, use fcntl to get a lock across different processes (use lockf instead for simplicity). On Linux, lockf is just a wrapper around fcntl; both are associated with a (pid, inode) pair.
1. Use fcntl.fcntl to provide a file lock across processes.
import os
import sys
import time
import fcntl
import struct
fd = open('/etc/mtab', 'r')
ppid = os.getpid()
print('parent pid: %d' % ppid)
lockdata = struct.pack('hhllh', fcntl.F_RDLCK, 0, 0, 0, ppid)
res = fcntl.fcntl(fd.fileno(), fcntl.F_SETLK, lockdata)
print('put read lock in parent process: %s' % str(struct.unpack('hhllh', res)))
if os.fork():
    os.wait()
    lockdata = struct.pack('hhllh', fcntl.F_UNLCK, 0, 0, 0, ppid)
    res = fcntl.fcntl(fd.fileno(), fcntl.F_SETLK, lockdata)
    print('release lock: %s' % str(struct.unpack('hhllh', res)))
else:
    cpid = os.getpid()
    print('child pid: %d' % cpid)
    lockdata = struct.pack('hhllh', fcntl.F_WRLCK, 0, 0, 0, cpid)
    try:
        fcntl.fcntl(fd.fileno(), fcntl.F_SETLK, lockdata)
    except OSError:
        res = fcntl.fcntl(fd.fileno(), fcntl.F_GETLK, lockdata)
        print('fail to get lock: %s' % str(struct.unpack('hhllh', res)))
    else:
        print('succeeded in getting lock')
2. Use fcntl.lockf.
import os
import time
import fcntl
fd = open('/etc/mtab', 'w')
fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
if os.fork():
    os.wait()
    fcntl.lockf(fd, fcntl.LOCK_UN)
else:
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError as e:
        print('failed to get lock')
    else:
        print('succeeded in getting lock')
You need to pass in the file descriptor (obtainable by calling the fileno() method of the file object). The code below throws an IOError when the same code is run in a separate interpreter.
>>> import fcntl
>>> thefile = open('/tmp/testfile')
>>> fd = thefile.fileno()
>>> fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
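One way to verify the cross-process behaviour without opening a second interpreter by hand is to spawn a child process that tries the non-blocking lock (a sketch; the child is expected to fail while the parent holds the lock):

```python
import fcntl
import subprocess
import sys

f = open('/tmp/testfile', 'w')
fcntl.flock(f.fileno(), fcntl.LOCK_EX)

# The child opens its own descriptor; it should not be able to get the lock.
child_code = (
    "import fcntl, sys\n"
    "try:\n"
    "    fcntl.flock(open('/tmp/testfile', 'w'), fcntl.LOCK_EX | fcntl.LOCK_NB)\n"
    "except IOError:\n"
    "    sys.exit(0)  # expected: the parent still holds the lock\n"
    "sys.exit(1)\n"
)
rc = subprocess.call([sys.executable, '-c', child_code])
print('child was blocked' if rc == 0 else 'child got the lock')
```

Unlike the two-statement example at the start of the thread, the parent here keeps f referenced, so the lock is still held when the child runs.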