
multiprocessing.Pool.imap_unordered with fixed queue size or buffer?

I am reading data from large CSV files, processing it, and loading it into a SQLite database. Profiling suggests 80% of my time is spent on I/O and 20% is processing input to prepare it for DB insertion. I sped up the processing step with multiprocessing.Pool so that the I/O code is never waiting for the next record. But this caused serious memory problems because the I/O step could not keep up with the workers.

The following toy example illustrates my problem:

#!/usr/bin/env python  # 3.4.3
import time
from multiprocessing import Pool

def records(num=100):
    """Simulate generator getting data from large CSV files."""
    for i in range(num):
        print('Reading record {0}'.format(i))
        time.sleep(0.05)  # getting raw data is fast
        yield i

def process(rec):
    """Simulate processing of raw text into dicts."""
    print('Processing {0}'.format(rec))
    time.sleep(0.1)  # processing takes a little time
    return rec

def writer(records):
    """Simulate saving data to SQLite database."""
    for r in records:
        time.sleep(0.3)  # writing takes the longest
        print('Wrote {0}'.format(r))

if __name__ == "__main__":
    data = records(100)
    with Pool(2) as pool:
        writer(pool.imap_unordered(process, data, chunksize=5))

This code results in a backlog of records that eventually consumes all memory because I cannot persist the data to disk fast enough. Run the code and you'll notice that Pool.imap_unordered will consume all the data when writer is at the 15th record or so. Now imagine the processing step is producing dictionaries from hundreds of millions of rows and you can see why I run out of memory. Amdahl's Law in action perhaps.

What is the fix for this? I think I need some sort of buffer for Pool.imap_unordered that says "once there are x records that need insertion, stop and wait until there are less than x before making more." I should be able to get some speed improvement from preparing the next record while the last one is being saved.

I tried using NuMap from the papy module (which I modified to work with Python 3) to do exactly this, but it wasn't faster. In fact, it was worse than running the program sequentially; NuMap uses two threads plus multiple processes.

Bulk import features of SQLite are probably not suited to my task because the data need substantial processing and normalization.

I have about 85G of compressed text to process. I'm open to other database technologies, but picked SQLite for ease of use and because this is a write-once read-many job in which only 3 or 4 people will use the resulting database after everything is loaded.

As I was working on the same problem, I figured that an effective way to prevent the pool from overloading is to use a semaphore with a generator:

from multiprocessing import Pool, Semaphore
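# Note: first_function, second_function, third_function and database_con are
# placeholders from this answer, assumed to be defined/initialised elsewhere.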

def produce(semaphore, from_file):
    with open(from_file) as reader:
        for line in reader:
            # Reduce Semaphore by 1 or wait if 0
            semaphore.acquire()
            # Now deliver an item to the caller (pool)
            yield line

def process(item):
    result = (first_function(item),
              second_function(item),
              third_function(item))
    return result

def consume(semaphore, result):
    database_con.cur.execute("INSERT INTO ResultTable VALUES (?,?,?)", result)
    # Result is consumed, semaphore may now be increased by 1
    semaphore.release()

def main():
    global database_con
    semaphore_1 = Semaphore(1024)
    with Pool(2) as pool:
        for result in pool.imap_unordered(process, produce(semaphore_1, "workfile.txt"), chunksize=128):
            consume(semaphore_1, result)

See also:

K Hong - Multithreading - Semaphore objects & thread pool

Lecture from Chris Terman - MIT 6.004 L21: Semaphores

Since processing is fast but writing is slow, it sounds like your problem is I/O-bound. Therefore there might not be much to be gained from using multiprocessing.

However, it is possible to peel off chunks of data, process each chunk, and wait until that data has been written before peeling off another chunk:

import itertools as IT
if __name__ == "__main__":
    data = records(100)
    with Pool(2) as pool:
        chunksize = ...
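        # islice pulls up to `chunksize` records at a time; iter() stops when it returns an empty list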
        for chunk in iter(lambda: list(IT.islice(data, chunksize)), []):
            writer(pool.imap_unordered(process, chunk, chunksize=5))

It sounds like all you really need is to replace the unbounded queues underneath the Pool with bounded (and blocking) queues. That way, if any side gets ahead of the rest, it'll just block until they're ready.

This would be easy to do by peeking at the source and subclassing or monkeypatching Pool, something like:

import multiprocessing.pool
import queue

class Pool(multiprocessing.pool.Pool):
    def _setup_queues(self):
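        # Bounded, blocking queues in place of the default unbounded SimpleQueue objects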
        self._inqueue = self._ctx.Queue(5)
        self._outqueue = self._ctx.Queue(5)
        self._quick_put = self._inqueue._writer.send
        self._quick_get = self._outqueue._reader.recv
        self._taskqueue = queue.Queue(10)

But that's obviously not portable (even to CPython 3.3, much less to a different Python 3 implementation).

I think you can do it portably in 3.4+ by providing a customized context, but I haven't been able to get that right, so…

A simple workaround might be to use psutil to detect the memory usage in each process and, if more than 90% of memory is taken, just sleep for a while.

while psutil.virtual_memory().percent > 75:
    time.sleep(1)
    print("process paused for 1 second!")
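
For illustration, one hypothetical placement for this check is inside the input generator from the toy example above, so that reading pauses until the writer catches up and system memory drops again (the 75% threshold is just the value from the snippet; psutil is a third-party package):

import time
import psutil  # third-party: pip install psutil

def records(num=100):
    """Simulate the CSV reader, but pause while memory usage is high."""
    for i in range(num):
        # Hypothetical throttle: stop reading new records while system
        # memory usage is above the chosen threshold.
        while psutil.virtual_memory().percent > 75:
            time.sleep(1)
            print('process paused for 1 second!')
        print('Reading record {0}'.format(i))
        time.sleep(0.05)  # getting raw data is fast
        yield i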
