
Problem with Multiprocessing and Deadlocking in Python3

I'm having a problem with multiprocessing, and I'm afraid it's a fairly simple fix that I'm just not implementing correctly. I've been researching the things that can cause this, but all I've really found is people recommending the use of a queue to prevent it. That doesn't seem to be stopping it, though (again, I may just be implementing the queue incorrectly). I've been at this for a few days now and was hoping to get some help. Thanks in advance!

import csv
import multiprocessing as mp
import os
import queue
import sys
import time

import connections
import packages
import profiles


def execute_extract(package, profiles, q):
    # This is the package execution for the extract
    # It fires fine and will print the starting message below
    started_at = time.monotonic()
    print(f"Starting {package.packageName}")
    try:
        oracle_connection = connections.getOracleConnection(profiles['oracle'], 1)
        engine = connections.getSQLConnection(profiles['system'], 1)
        path = os.path.join(os.getcwd(), 'csv_data', package.packageName + '.csv')
        cursor = oracle_connection.cursor()

        if os.path.exists(path):
            os.remove(path)

        f = open(path, 'w')
        chunksize = 100000
        offset = 0
        row_total = 0

        csv_writer = csv.writer(f, delimiter='^', lineterminator='\n')
        # I am having to do some data cleansing.  I know this is not the most efficient way to do this, but currently
        # it is what I am limited to
        while True:
            cursor.execute(package.query + f'\r\n OFFSET {offset} ROWS\r\n FETCH NEXT {chunksize} ROWS ONLY')
            test = cursor.fetchone()
            if test is None:
                break
            else:
                while True:
                    row = cursor.fetchone()
                    if row is None:
                        break
                    else:
                        new_row = list(row)
                        new_row.append(package.sourceId[0])
                        new_row.append('')
                        i = 0
                        for item in new_row:
                            if type(item) == float:
                                new_row[i] = int(item)
                            elif type(item) == str:
                                new_row[i] = item.encode('ascii', 'replace')
                            i += 1
                        row = tuple(new_row)
                        csv_writer.writerow(row)
                        row_total += 1

            offset += chunksize

        f.close()
        # I know that execution is at least reaching this point.  I can watch the CSV files grow as more and more
        # rows are added to the files for all the packages.  What I never get is either the success message or the
        # error message below, and there are never any entries placed in the tables
        query = f"BULK INSERT {profiles['system'].database.split('_')[0]}_{profiles['system'].database.split('_')[1]}_test_{profiles['system'].database.split('_')[2]}.{package.destTable} FROM \"{path}\" WITH (FIELDTERMINATOR='^', ROWTERMINATOR='\\n');"
        engine.cursor().execute(query)
        engine.commit()

        end_time = time.monotonic() - started_at
        print(
            f"{package.packageName} has completed.  Total rows inserted: {row_total}.  Total execution time: {end_time} seconds\n")
        os.remove(path)
    except Exception as e:
        print(f'An error has occurred for package {package.packageName}.\r\n {repr(e)}')

    finally:
        # Here is where I am trying to add an item to the queue so the get method in the main def will pick it up and
        # remove it from the queue 
        q.put(f'{package.packageName} has completed')
        if oracle_connection:
            oracle_connection.close()
        if engine:
            engine.cursor().close()
            engine.close()


if __name__ == '__main__':
    # Setting mp creation type
    ctx = mp.get_context('spawn')
    q = ctx.Queue()

    # For the ETL I generate a list of class objects that hold relevant information: profs contains a list of
    # connection objects (credentials, connection strings, etc.); packages contains the information to run the
    # extract (destination tables, query string, package name for logging, etc.)
    profs = profiles.get_conn_vars(sys.argv[1])
    packages = packages.get_etl_packages(profs)

    processes = []
    # I'm trying to track both individual package execution time and overall time so I can get an estimate on rows 
    # per second 
    start_time = time.monotonic()

    sqlConn = connections.getSQLConnection(profs['system'])
    # Here I'm executing a SQL command to truncate all my staging tables to ensure they are empty and will not 
    # generate any key violations 
    sqlConn.execute(
        f"USE[{profs['system'].database.split('_')[0]}_{profs['system'].database.split('_')[1]}_test_{profs['system'].database.split('_')[2]}]\r\nExec Sp_msforeachtable @command1='Truncate Table ?',@whereand='and Schema_Id=Schema_id(''my_schema'')'")

    # Here is where I start generating a process per package to try and get all packages to run simultaneously
    for package in packages:
        p = ctx.Process(target=execute_extract, args=(package, profs, q,))
        processes.append(p)
        p.start()

    # Here is my attempt at managing the queue.  This is a monstrosity of fixes I've tried to get this to work
    results = []
    while True:
        try:
            result = q.get(False, 0.01)
            results.append(result)
        except queue.Empty:
            pass
        allExited = True
        for t in processes:
            if t.exitcode is None:
                allExited = False
                break
        if allExited and q.empty():
            break

    for p in processes:
        p.join()

    # Closing out the end time and writing the overall execution time in minutes.
    end_time = time.monotonic() - start_time
    print(f'Total execution time of {end_time / 60} minutes.')

I can't be certain why you are experiencing a deadlock (and I'm not at all convinced it is related to your queue management), but I can say for certain that you could simplify your queue-management logic if you did one of two things:

Method 1

Ensure that your worker function, execute_extract, puts something on the results queue even in the case of an exception (I would recommend putting the Exception object itself). Then the entire main-process loop that begins with while True: can be replaced with:

results = [q.get() for _ in range(len(processes))]

You are guaranteed that there will be a fixed number of messages on the queue, equal to the number of processes created.
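A minimal sketch of that Method 1 worker contract (worker and task_id are illustrative names, not the asker's actual code): every code path, success or failure, puts exactly one message on the queue.

```python
def worker(task_id, q):
    # Put exactly one message on the queue in every code path, even on
    # failure, so the parent can issue exactly len(processes) blocking
    # q.get() calls and never needs to poll or race against process exits.
    try:
        result = 10 // task_id  # raises ZeroDivisionError when task_id == 0
        q.put((task_id, result))
    except Exception as e:
        # Ship the Exception object itself; the parent can inspect or log it.
        q.put((task_id, e))
```

With that guarantee in place, the parent simply calls q.get() once per process it started, blocking until each message arrives.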

Method 2 (simpler)

Simply reverse the order in which you wait for the subprocesses to complete and process the results queue. You don't know how many messages will be on the queue, but you aren't processing the queue until all the processes have returned. So however many messages are on the queue is all you will ever get. Just retrieve them until the queue is empty:

for p in processes:
    p.join()
results = []
while not q.empty():
    results.append(q.get())

At this point I would normally recommend that you use a multiprocessing pool class such as multiprocessing.Pool, which does not require an explicit queue to retrieve results. But make either of these changes (I suggest Method 2, since I cannot see how it could result in a deadlock given that only the main process is running at that point) and see whether your problem goes away. I am not, however, guaranteeing that your problem is not elsewhere in your code. While your code is overly complicated and inefficient, it is not obviously "incorrect". At least you will know whether or not your problem lies elsewhere.
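As a hedged sketch of that Pool suggestion (extract and its squaring body are placeholders, not the asker's packages): Pool.map collects each worker's return value directly, and an exception raised in a worker is re-raised in the parent when the results are collected, so no explicit queue is needed.

```python
import multiprocessing as mp


def extract(task_id):
    # Return a value instead of pushing to a shared queue; the Pool
    # machinery delivers results back to the parent, in order.
    return task_id * task_id


if __name__ == '__main__':
    with mp.Pool(processes=4) as pool:
        results = pool.map(extract, range(5))
    print(results)  # [0, 1, 4, 9, 16]
```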

A question of my own: what does using a context obtained with ctx = mp.get_context('spawn') do for you, rather than just calling the methods on the multiprocessing module itself? If your platform supports the fork call, that would be the default context; wouldn't you want to use it?
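For context on that question, a tiny sketch (platform defaults stated to the best of my knowledge): a context object is the multiprocessing API bound to one fixed start method, while the module-level functions follow the platform default ('fork' on Linux; 'spawn' on Windows, and on macOS since Python 3.8).

```python
import multiprocessing as mp

ctx = mp.get_context('spawn')
print(ctx.get_start_method())  # always 'spawn', regardless of platform
print(mp.get_start_method())   # whatever this platform defaults to
```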
