
Downloading files concurrently in Python

This code downloads metadata from a repository, writes that data to file, downloads a pdf, turns that pdf into text and then deletes the original pdf:

import os
import uuid

for record in records:
    record_data = []  # data is stored in record_data
    for name, metadata in record.metadata.items():
        for i, value in enumerate(metadata):
            if value:
                record_data.append(value)

    fulltext = ''
    file_path = ''
    file_path_metadata = ''
    unique_id = str(uuid.uuid4())
    for data in record_data:
        if 'Fulltext' in data:
            # the link to the pdf
            fulltext = data.replace('Fulltext ', '')
            # path where the pdf will be stored
            file_path = '/' + os.path.basename(data).replace('.pdf', '') + unique_id + '.pdf'
            # path where the metadata will be stored
            file_path_metadata = '/' + os.path.basename(data).replace('.pdf', '') + unique_id + '_metadata.txt'
            print fulltext, file_path

    # Write metadata to file
    if fulltext:
        try:
            write_metadata = open(path_to_institute + file_path_metadata, 'w')
            for i, data in enumerate(record_data):
                write_metadata.write('MD_' + str(i) + ': ' + data.encode('utf8') + '\n')
            write_metadata.close()
        except Exception as e:
            # Exceptions due to missing path to file
            print 'Exception when writing metadata: {}'.format(e)
            print fulltext, path_to_institute, file_path_metadata

        # Download pdf
        download_pdf(fulltext, path_to_institute + file_path)

        # Create text file and delete pdf
        pdf2text(path_to_institute + file_path)

Doing some measurements, the download_pdf method and the pdf2text method take up quite a lot of time.

Here are those methods:

from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfinterp import PDFResourceManager
from pdfminer.pdfinterp import PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from cStringIO import StringIO
import urllib2
import os


def remove_file(path):
    try:
        os.remove(path)
    except OSError as e:
        print ("Error: %s - %s." % (e.filename, e.strerror))


def pdf2text(path):
    string_handling = StringIO()
    # pdfs are binary, so open in 'rb' mode
    parser = PDFParser(open(path, 'rb'))

    try:
        document = PDFDocument(parser)
    except Exception as e:
        print '{} is not a readable document. Exception {}'.format(path, e)
        return

    if document.is_extractable:
        recourse_manager = PDFResourceManager()
        device = TextConverter(recourse_manager,
                               string_handling,
                               codec='ascii',
                               laparams=LAParams())
        interpreter = PDFPageInterpreter(recourse_manager, device)
        for page in PDFPage.create_pages(document):
            interpreter.process_page(page)

        # write to file (only opened once we know the pdf is readable,
        # so unreadable pdfs do not leave empty txt files behind)
        save_file = open(path.replace('.pdf', '.txt'), 'w')
        save_file.write(string_handling.getvalue())
        save_file.close()

        # deletes pdf
        remove_file(path)

    else:
        print(path, "Warning: could not extract text from pdf file.")
        return


def download_pdf(url, path):
    try:
        f = urllib2.urlopen(url)
    except Exception as e:
        print e
        f = None

    if f:
        data = f.read()
        with open(path, "wb") as code:
            code.write(data)  # the with block closes the file

So I figured I should run them in parallel. I tried this, but with no luck:

import multiprocessing as mp

pool = mp.Pool(processes=len(process_data))
for i in process_data:
    print i
    pool.apply(download_pdf, args=(i[0], i[1]))

pool = mp.Pool(processes=len(process_data))
for i in process_data:
    print i[1]
    pool.apply(pdf2text, args=(i[1],))

It takes just as long, though? The prints look as if it is running one process at a time...
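The likely culprit: Pool.apply blocks until each call has finished, so the loops above submit and wait for one job at a time. apply_async queues them all up front. A minimal sketch, assuming process_data holds (url, path) pairs as in the question:

import multiprocessing as mp

pool = mp.Pool(processes=4)
# apply_async returns immediately, so every download is queued before any waiting
results = [pool.apply_async(download_pdf, args=(url, path))
           for url, path in process_data]
pool.close()
pool.join()  # block until all downloads are done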

Here is a good article on how to build things in parallel; it uses multiprocessing.dummy to run things in different threads.

Here is a small example:

from urllib2 import urlopen
from multiprocessing.dummy import Pool

urls = [url_a,
        url_b,
        url_c
       ]

pool = Pool()
res = pool.map(urlopen, urls)

pool.close()
pool.join()
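Adapted to the download_pdf call from the question, a sketch, again assuming process_data is a list of (url, path) pairs; the download_one wrapper is only there because Pool.map passes a single argument:

from multiprocessing.dummy import Pool  # thread pool behind the Pool API

def download_one(pair):
    # unpack the (url, path) pair, since Pool.map passes one argument
    url, path = pair
    download_pdf(url, path)

pool = Pool(4)  # four worker threads
pool.map(download_one, process_data)
pool.close()
pool.join()

Because downloading is I/O-bound, threads work well here and avoid the overhead of spawning processes.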

For python >= 3.3 I would recommend concurrent.futures.

Example:

import functools
import urllib.request
import concurrent.futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# submit() schedules each call and returns a Future right away
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
    future_list = [executor.submit(functools.partial(load_url, url, 30))
                   for url in URLS]

Example taken from: here
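To actually collect the results (and any exceptions) as the downloads finish, the stdlib's concurrent.futures.as_completed fits here; a short sketch building on the example above:

import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
    # map each future back to its url so failures can be reported
    future_to_url = {executor.submit(load_url, url, 30): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
            print('%r: %d bytes' % (url, len(data)))
        except Exception as e:
            print('%r failed: %s' % (url, e))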

I finally found a way to run the code in parallel. It is unbelievable how much faster it got.

import multiprocessing as mp

# start one download per record
jobs = []
for i in process_data:
    p = mp.Process(target=download_pdf, args=(i[0], i[1]))
    jobs.append(p)
    p.start()

# as soon as a download has finished, start converting that pdf
converters = []
for i, data in enumerate(process_data):
    print data
    jobs[i].join()
    p = mp.Process(target=pdf2text, args=(data[1],))
    converters.append(p)
    p.start()

# wait for all conversions to finish
for p in converters:
    p.join()
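One caveat about the snippet above: it spawns one process per record, so a long process_data list means that many simultaneous downloads. A Pool caps the concurrency instead; a sketch, assuming the same (url, path) pairs, where process_record is a made-up helper chaining the two steps:

import multiprocessing as mp

def process_record(pair):
    # download the pdf, then convert it, inside a single worker process
    url, path = pair
    download_pdf(url, path)
    pdf2text(path)

pool = mp.Pool(processes=mp.cpu_count())
pool.map(process_record, process_data)
pool.close()
pool.join()

Keeping each pdf's download and conversion in one worker also removes the need to pair up the two lists of processes by index.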
