
os.path.exists() lies

I'm running a number of Python scripts on a Linux cluster, and the output from one job is generally the input to another script, potentially run on another node. I find there is a not-insignificant lag before Python notices files that have been created on other nodes: os.path.exists() returns False and open() fails as well. I can do a while not os.path.exists(mypath) loop until the file appears, and it can take upwards of a full minute, which is not optimal in a pipeline with many steps that may be running many datasets in parallel.

The only workaround I've found so far is to call subprocess.Popen("ls %s" % (pathdir), shell=True), which magically fixes the problem. I figure this is probably a system problem, but is there any way Python might be causing this? Some sort of cache or something? My sysadmin hasn't been much help so far.

os.path.exists() just calls the C library's stat() function.
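Roughly, it is equivalent to the sketch below (the real implementation lives in CPython's genericpath.py), so it can only ever report what the kernel's stat() reports:

    import os

    def exists(path):
        # Roughly what os.path.exists() does: any stat() failure
        # (ENOENT, EACCES, ...) is reported as "does not exist".
        try:
            os.stat(path)
        except OSError:
            return False
        return True

So if the NFS client's cache answers stat() with "no such file", os.path.exists() faithfully repeats that answer.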

I believe you're running into a cache in the kernel's NFS implementation. Below is a link to a page that describes the problem as well as some methods to flush the cache.

File Handle Caching

Directories cache the mapping from file names to file handles. The most common problems with this are:

•You have an open file, and you need to check whether it has been replaced by a newer file. You have to flush the parent directory's file handle cache before stat() will return the new file's information rather than the opened file's.

◦This case actually has another problem: the old file may have been deleted and replaced by a new file, but both files may share the same inode number. You can detect this by flushing the open file's attribute cache and then checking whether fstat() fails with ESTALE (see the Python sketch after this list).

•You need to check whether a file exists, for example a lock file. The kernel may have cached that the file does not exist, even though it now does. You have to flush the parent directory's negative file handle cache to see whether the file really exists.
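For the replaced-file case in the first bullet, here is a minimal Python sketch of that ESTALE check, assuming fd is an already-open file descriptor and that its attribute cache has been flushed by one of the methods below:

    import errno
    import os

    def open_file_is_stale(fd):
        # fstat() on a stale NFS file handle fails with ESTALE once the
        # open file's attribute cache has been flushed; that means the
        # file was deleted (and possibly replaced) on the server.
        try:
            os.fstat(fd)
        except OSError as exc:
            if exc.errno == errno.ESTALE:
                return True
            raise
        return False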

A few ways to flush the file handle cache:

•If the parent directory's mtime changed, the file handle cache gets flushed by flushing its attribute cache. This should work quite well if the NFS server supports nanosecond or microsecond resolution.

•Linux: chown() the directory to its current owner. The file handle cache is flushed if the call returns successfully (a Python sketch of this trick follows the source link below).

•Solaris 9, 10: The only way is to try to rmdir() the parent directory. ENOTEMPTY means the cache is flushed. Trying to rmdir() the current directory fails with EINVAL and doesn't flush the cache.

•FreeBSD 6.2: The only way is to try to rmdir() either the parent directory or the file under it. ENOTEMPTY, ENOTDIR and EACCES failures mean the cache was flushed, but ENOENT means it was not. FreeBSD does not cache negative entries, so they do not have to be flushed.

http://web.archive.org/web/20100912144722/http://www.unixcoding.org/NFSCoding
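Of the methods above, the Linux chown()-to-current-owner trick is the easiest to apply from Python. A minimal sketch, assuming the directory lives on NFS and the calling user is allowed to chown() it (the flush_dir_cache name is made up here):

    import os

    def flush_dir_cache(dirpath):
        # chown() the directory to its current owner; on Linux a
        # successful return means the kernel's name-to-file-handle
        # cache for this directory has been flushed (see notes above).
        st = os.stat(dirpath)
        os.chown(dirpath, st.st_uid, st.st_gid)

After that, a fresh os.path.exists() / stat() on a file in that directory should go back to the server instead of the negative cache.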

The subprocess.Popen("ls %s" % (pathdir), shell=True) call doesn't help because of the new shell it spawns; it helps because ls reads the directory, and that read forces the NFS client to refresh its cached view of the directory, which is what works around the issue you're experiencing.

Python is not causing this issue. It's a combination of how NFS and the Linux client's directory/attribute caching behave.
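A pure-Python sketch of the same workaround that polls with a timeout instead of looping forever (wait_for_file and its defaults are made up for illustration; os.listdir() issues a directory read similar to what ls does):

    import os
    import time

    def wait_for_file(path, timeout=120.0, poll_interval=1.0):
        # Re-read the parent directory on every poll so the NFS client
        # refreshes its cached listing, then check for the file.
        parent = os.path.dirname(path) or "."
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            os.listdir(parent)
            if os.path.exists(path):
                return True
            time.sleep(poll_interval)
        return False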
