
Permission Denied sometimes when downloading executable file

So I'm writing a script to download a file and restart the download from where it left off if it doesn't complete. That part's taken care of.

I've stripped all of that out to show just the part that doesn't work. The Amazon URL here is made up, so running this as-is won't actually download anything, but replacing it with a real download link would:

import urllib
import time
import os

file_name = "setup.exe"
web_page = urllib.FancyURLopener().open("https://s3.amazonaws.com/some_bucket/files/"+file_name)
while True:
    data = web_page.read(8192)
    if not data:
        print "done"
        break
    #print os.getcwd()
    with open(file_name, "ab") as outputFile:
        outputFile.write(data)
    #print "going..."
    #time.sleep(1)

What happens is (and this happens ONLY when downloading EXE files): the process will read from web_page a seemingly random number of times (between 1 and 20 or so), then throw an IOError: [Errno 13] Permission denied. Again, with a .gif, a .mov, or the other file types I've tested, the permission denied error is never thrown.

Additionally, uncommenting the time.sleep(1) line resolves the issue. It's as though the with statement doesn't fully close the file before continuing.

I thought the with statement was supposed to handle the close, no?
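(For the record, `with` does close the file the moment the block exits — that part of the assumption is correct, and it's easy to verify directly. Python 3 syntax shown here; the behavior is the same in Python 2:)

```python
import os
import tempfile

# Throwaway file just to demonstrate that `with` closes on block exit.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")

with open(path, "ab") as f:
    f.write(b"chunk")
    assert not f.closed  # still open inside the block

assert f.closed  # closed as soon as the block exits
```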

I also thought that perhaps SOMEHOW my current directory was being changed, but uncommenting the os.getcwd() print never shows a change (though by the same logic it wouldn't necessarily have to).

(What's also weird: if I run this script from the desktop [so that it also writes to the desktop] with Aptana open in front of it, the permission denied error doesn't occur, yet the second I minimize the text editor to focus the desktop, the error is thrown. I attribute this to Aptana consuming a lot of resources while it's open, slowing the script down and acting like an incidental time.sleep?)

Thanks very much for any pointers.

I don't understand why you're going to the trouble to re-open and close the file for each network read. As Pavel suggests, this might give a virus scanner a chance to open (and lock?) the file to scan it. Why not just open it once, do all your I/O, then close it? (I suppose it may have something to do with the code you omitted.)

Instead of:

while True:
    data = web_page.read(8192)
    if not data:
        print "done"
        break
    with open(file_name, "ab") as outputFile:
        outputFile.write(data)

Try:

with open(file_name, "ab") as outputFile:
    while True:
        data = web_page.read(8192)
        if not data:
            print "done"
            break
        outputFile.write(data)
