
fcntl, lockf, which is better to use for file locking?

Looking for information regarding the advantages and disadvantages of both fcntl and lockf for file locking. For example, which is better to use for portability? I am currently coding a Linux daemon and wondering which is better suited for enforcing mutual exclusion.

What is the difference between lockf and fcntl:

On many systems, the lockf() library routine is just a wrapper around fcntl(). That is to say, lockf offers a subset of the functionality that fcntl does.

Source

But on some systems, fcntl and lockf locks are completely independent.

Source

Since it is implementation dependent, make sure to always use the same convention. So either always use lockf from both your processes or always use fcntl. There is a good chance that they will be interchangeable, but it's safer to use the same one.
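For illustration, here is a minimal sketch of the two interfaces side by side (the path /tmp/example.lock is made up). It takes an exclusive whole-file lock first with lockf and then with the roughly equivalent fcntl request; on many systems these end up being the same lock, which is exactly why mixing them often works but should not be relied upon:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/example.lock", O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return 1;

        /* Variant A: lockf - exclusive lock from the current offset to EOF (len 0). */
        if (lockf(fd, F_LOCK, 0) == -1)
            return 1;

        /* Variant B: the roughly equivalent fcntl request on many systems. */
        struct flock fl = {
            .l_type   = F_WRLCK,   /* exclusive (write) lock */
            .l_whence = SEEK_SET,  /* offsets relative to the start of the file */
            .l_start  = 0,
            .l_len    = 0,         /* 0 means "to end of file" */
        };
        if (fcntl(fd, F_SETLKW, &fl) == -1)  /* F_SETLKW blocks, like F_LOCK */
            return 1;

        /* ... critical section ... */

        close(fd);   /* closing the descriptor releases the locks */
        return 0;
    }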

Which one you choose doesn't matter.


Some notes on mandatory vs advisory locks:

Locking in Unix/Linux is advisory by default, meaning other processes don't need to follow the locking rules that are set. So it doesn't matter which way you lock, as long as your co-operating processes also use the same convention.

Linux does support mandatory locking, but only if your file system is mounted with the option on and the file's special mode bits are set. You can use mount -o mand to mount the file system and set the file permissions g-x,g+s (clear the group-execute bit, set the set-group-ID bit) to enable mandatory locks, then use fcntl or lockf. For more information on how mandatory locks work, see here.
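As a sketch of that setup from C (the helper name is made up): g-x,g+s means setting the set-group-ID bit while clearing the group-execute bit, which is the combination the kernel treats as a mandatory-locking candidate on a filesystem mounted with -o mand:

    #include <sys/stat.h>

    /* Hypothetical helper: mark a file as a mandatory-locking candidate,
       the C equivalent of chmod g-x,g+s.  It only takes effect if the
       filesystem is mounted with the "mand" option. */
    int mark_mandatory(const char *path)
    {
        struct stat st;
        if (stat(path, &st) == -1)
            return -1;
        mode_t perms = st.st_mode & 07777;   /* keep only the permission bits */
        return chmod(path, (perms | S_ISGID) & ~(mode_t)S_IXGRP);
    }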

Note that locks are applied not to the individual file, but to the inode. This means that 2 filenames that point to the same file data will share the same lock status.

Windows, on the other hand, lets you open a file for exclusive access, which blocks other processes from opening it at all, whether they want to or not. That is, the locks are mandatory. The same goes for Windows byte-range file locks: any process with an open file handle with appropriate access can lock a portion of the file, and no other process will be able to access that portion.


How mandatory locks work in Linux:

Concerning mandatory locks: if a process locks a region of a file with a read lock, then other processes are permitted to read but not write to that region. If a process locks a region of a file with a write lock, then other processes are permitted neither to read from nor to write to that region. What happens when a process is denied access to part of the file depends on whether you specified O_NONBLOCK. Without it, the operation blocks until the conflicting lock is released; with it, the call fails immediately with the error code EAGAIN.
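A minimal sketch of that behaviour (the path is made up, and the file is assumed to sit on a filesystem mounted with -o mand with the g-x,g+s bits set): the process below tries to write into a region that another process holds a mandatory write lock on:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_NONBLOCK: report EAGAIN instead of sleeping when the region we
           touch is covered by someone else's mandatory lock. */
        int fd = open("/mnt/mand/data.bin", O_WRONLY | O_NONBLOCK);
        if (fd < 0)
            return 1;

        if (write(fd, "x", 1) == -1 && errno == EAGAIN)
            fprintf(stderr, "region is mandatorily locked elsewhere; "
                            "without O_NONBLOCK this write would block\n");

        close(fd);
        return 0;
    }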


NFS warning:

Be careful if you are using locking commands on an NFS mount. The behavior is undefined, and implementations vary widely in whether they apply the lock only locally or support remote locking.

Both interfaces are part of the POSIX standard, and nowadays both are available on most systems (I just checked Linux, FreeBSD, Mac OS X, and Solaris). Therefore, choose the one that better fits your requirements and use it.

One word of caution: it is unspecified what happens when one process locks a file using fcntl and another using lockf. On most systems these are equivalent operations (in fact, under Linux lockf is implemented on top of fcntl), but POSIX says their interaction is unspecified. So, if you are interoperating with another process that uses one of the two interfaces, choose the same one.

Others have written that the locks are only advisory: you are responsible for checking whether a region is locked. Also, don't use stdio functions if you want to use the locking functionality.

Your main concerns, in this case (i.e. when "coding a Linux daemon and wondering which is better suited to use for enforcing mutual exclusion"), should be:

  1. will the locked file be local, or can it be on NFS?
    • e.g. can the user trick you into creating and locking your daemon's pid file on NFS?
  2. how will the lock behave when forking, or when the daemon process is terminated with extreme prejudice, e.g. kill -9?

The flock and fcntl calls behave differently in both cases.
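For the pid-file / mutual-exclusion case, a common fcntl-based pattern looks roughly like the sketch below (the path and helper name are made up). Two properties matter for the concerns above: the lock is not inherited across fork, so take it after daemonizing, and the kernel drops it automatically when the owning process exits, even on kill -9, so a stale pid file can never block a restart:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: single-instance guard for a daemon using an fcntl write lock.
       Returns the locked descriptor (keep it open for the daemon's whole
       lifetime) or -1 if another instance already holds the lock. */
    int acquire_pidfile_lock(const char *path)   /* e.g. "/var/run/mydaemon.pid" (made up) */
    {
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return -1;

        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = 0, .l_len = 0 };
        if (fcntl(fd, F_SETLK, &fl) == -1) {     /* non-blocking: fail fast if locked */
            close(fd);
            return -1;
        }

        char buf[32];
        int len = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
        if (ftruncate(fd, 0) == 0)
            (void)write(fd, buf, (size_t)len);
        return fd;
    }

Note that the descriptor must stay open; as the quote below explains, closing any other descriptor this process has for the same file would also silently drop the lock.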

My recommendation would be to use fcntl. You may refer to the File locking article on Wikipedia for an in-depth discussion of the problems involved with both solutions:

Both flock and fcntl have quirks which occasionally puzzle programmers from other operating systems.

Whether flock locks work on network filesystems, such as NFS, is implementation dependent. On BSD systems flock calls are successful no-ops. On Linux prior to 2.6.12, flock calls on NFS files would only act locally. Kernel 2.6.12 and above implement flock calls on NFS files using POSIX byte range locks. These locks will be visible to other NFS clients that implement fcntl()/POSIX locks.

Lock upgrades and downgrades release the old lock before applying the new lock. If an application downgrades an exclusive lock to a shared lock while another application is blocked waiting for an exclusive lock, the latter application will get the exclusive lock and the first application will be locked out.

All fcntl locks associated with a file for a given process are removed when any file descriptor for that file is closed by that process, even if a lock was never requested for that file descriptor. Also, fcntl locks are not inherited by a child process. The fcntl close semantics are particularly troublesome for applications which call subroutine libraries that may access files.
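The close semantics mentioned at the end of that quote are the easiest to trip over. A minimal sketch of the trap (the path is made up, error handling omitted):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd1 = open("/tmp/demo.lock", O_RDWR | O_CREAT, 0644);
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = 0, .l_len = 0 };
        fcntl(fd1, F_SETLKW, &fl);   /* lock acquired through fd1 */

        /* A second descriptor for the SAME file, perhaps opened deep
           inside a library call... */
        int fd2 = open("/tmp/demo.lock", O_RDONLY);
        close(fd2);                  /* ...and the lock held via fd1 is now gone */

        /* This process still believes it holds the lock, but from here on
           other processes can lock the file too. */
        close(fd1);
        return 0;
    }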

I came across an issue while using fcntl and flock recently that I felt I should report here as searching for either term shows this page near the top on both.

Be advised that BSD locks, as mentioned above, are advisory. For those who do not know, OS X (Darwin) is BSD-based. This must be remembered when opening a file to write into.

To use fcntl/flock you must first open the file to get a file descriptor. However, if you open the file with "w", the file is instantly zeroed out. If your process then fails to get the lock because the file is in use elsewhere, it will most likely return, leaving the file at 0 bytes. The process which held the lock will now find the file has vanished from underneath it; catastrophic results normally follow.

To remedy this situation, when using file locking, never open the file with "w"; instead open it with "a", to append. Then, if the lock is successfully acquired, you can safely clear the file as "w" would have, i.e.:

fseek(fileHandle, 0, SEEK_SET);      // move to the start
ftruncate(fileno(fileHandle), 0);    // clear it out

This was an unpleasant lesson for me.
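Putting that advice together, a sketch of the whole pattern might look like this (the function name is made up; flock is used as in the description above, but the same idea works with fcntl):

    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Sketch of the "open for append, lock, then truncate" pattern: the
       file is only cleared once we actually own the lock. */
    FILE *open_locked_for_rewrite(const char *path)
    {
        FILE *fp = fopen(path, "a");                  /* "a" does NOT truncate */
        if (fp == NULL)
            return NULL;

        if (flock(fileno(fp), LOCK_EX | LOCK_NB) == -1) {
            fclose(fp);                               /* lock busy: file left untouched */
            return NULL;
        }

        /* We own the lock, so it is now safe to clear the file as "w" would have. */
        fseek(fp, 0, SEEK_SET);
        ftruncate(fileno(fp), 0);
        return fp;                                    /* fclose() later releases the lock */
    }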

As you're only coding a daemon which uses the lock for mutual exclusion, the two are equivalent; after all, your application only needs to be compatible with itself.

The trick with the file locking mechanisms is to be consistent - use one and stick to it. Varying them is a bad idea.

I am assuming here that the filesystem will be a local one. If it isn't, then all bets are off: NFS and other network filesystems handle locking with varying degrees of effectiveness (in some cases none).

The following page does a good job at summarizing advantages and disadvantages of all kinds of file locks available on Linux. IMHO, it constitutes the most comprehensive answer to your question. https://gavv.github.io/articles/file-locks/
