
Downloading a large number of files using Python

test.txt contains the list of files to be downloaded:

http://example.com/example/afaf1.tif
http://example.com/example/afaf2.tif
http://example.com/example/afaf3.tif
http://example.com/example/afaf4.tif
http://example.com/example/afaf5.tif

How can these files be downloaded using Python at maximum download speed?

My thinking was as follows:

import urllib.request
with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        response = urllib.request.urlopen(line)

What comes after that? How do I select the download directory?

Select a path to your desired output directory (output_dir). In your for loop, split every URL on the / character and use the last piece as the filename. Also open the files for writing in binary mode (wb), since response.read() returns bytes, not str.

import os
import urllib.request

output_dir = 'path/to/your/output/dir'

with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        response = urllib.request.urlopen(line)
        # Name the local file after the last segment of the URL
        output_file = os.path.join(output_dir, line.split('/')[-1])
        # Write in binary mode, since response.read() returns bytes
        with open(output_file, 'wb') as writer:
            writer.write(response.read())

Note:

Downloading multiple files can be faster if you use multiple threads, since a single download rarely uses the full bandwidth of your internet connection.
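
For instance, here is a minimal sketch using concurrent.futures.ThreadPoolExecutor from the standard library (the worker count of 5 is an arbitrary assumption, not part of the original answer; test.txt and output_dir are as above):

import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

output_dir = 'path/to/your/output/dir'

def fetch(line):
    # Download a single URL into output_dir, named after the last URL segment
    output_file = os.path.join(output_dir, line.split('/')[-1])
    with urllib.request.urlopen(line) as response, open(output_file, 'wb') as writer:
        writer.write(response.read())
    return output_file

with open('test.txt', 'r') as f:
    lines = f.read().splitlines()

# Tune max_workers to your connection; a handful of threads is usually enough
with ThreadPoolExecutor(max_workers=5) as pool:
    for output_file in pool.map(fetch, lines):
        print('downloaded', output_file)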

Also, if the files you are downloading are pretty big, you should probably stream the read (reading chunk by chunk). As @Tiran commented, you should use shutil.copyfileobj(response, writer) instead of writer.write(response.read()) .

I would only add that you should probably always specify the length parameter too: shutil.copyfileobj(response, writer, 5*1024*1024) # (at least 5MB), since the default buffer size (16 KB in older versions of Python) is really small and will just slow things down.
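
Combining both suggestions, the download loop from above might look like this (a sketch, not the original answer's code; the 5 MB buffer follows the advice just given):

import os
import shutil
import urllib.request

output_dir = 'path/to/your/output/dir'

with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        output_file = os.path.join(output_dir, line.split('/')[-1])
        # Stream the body to disk in 5 MB chunks instead of loading it all into memory
        with urllib.request.urlopen(line) as response, open(output_file, 'wb') as writer:
            shutil.copyfileobj(response, writer, 5*1024*1024)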

This works fine for me (note that fileName must be the complete file name, for example 'afaf1.tif'):

import os
import urllib.request

def download(baseUrl, fileName, layer=0):
    print('Trying to download file:', fileName)
    url = baseUrl + fileName
    name = os.path.join('foldertodownload', fileName)
    try:
        # Note that the folder needs to exist
        urllib.request.urlretrieve(url, name)
    except OSError:
        # Upon failure to download, retries 5 times in total
        print('Download failed')
        print('Could not download file:', fileName)
        if layer > 4:
            return
        layer += 1
        print('retrying', str(layer) + '/5')
        download(baseUrl, fileName, layer)
        return
    print(fileName + ' downloaded')

# nameList and url (the base URL) are defined elsewhere
for fileName in nameList:
    download(url, fileName)

Moved unnecessary code out of the try block.
