
Python multiprocessing from Abaqus/CAE

I am using a commercial application called Abaqus/CAE 1 with a built-in Python 2.6 interpreter and API. I've developed a long-running script that I'm attempting to split into simultaneous, independent tasks using Python's multiprocessing module. However, once spawned, the processes just hang.

The script itself uses various objects/methods available only through Abaqus's proprietary cae module, which can only be loaded by starting up the Python bundled with Abaqus/CAE first; that Python then executes my script with Python's execfile.

To try to get multiprocessing working, I've attempted to run a script that avoids accessing any Abaqus objects and instead just performs a calculation and prints the result to file 2. This way, I can run the same script from the regular system Python installation as well as from the Python bundled with Abaqus.

The example code below works as expected when run from the command line using either of the following:

C:\some\path>python multi.py         # <-- Using system Python
C:\some\path>abaqus python multi.py  # <-- Using Python bundled with Abaqus

This spawns the new processes, and each runs the function and writes the result to file as expected. However, when called from the Abaqus/CAE Python environment using:

abaqus cae noGUI=multi.py

Abaqus will start up, automatically import its own proprietary modules, and then execute my file using:

execfile("multi.py", __main__.__dict__)

where the global namespace arg __main__.__dict__ is set up by Abaqus. Abaqus then checks out licenses for each process successfully, spawns the new processes, and... that's it. The processes are created, but they all hang and do nothing. There are no error messages.
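For reference, these are the values that Windows multiprocessing relies on when respawning children (on Windows there is no fork(): each Process launches a fresh interpreter and re-imports the parent's __main__ module). A minimal diagnostic sketch to inspect them from inside either environment:

```python
import sys


def spawn_diagnostics():
    """Collect the values Windows multiprocessing uses to respawn a child."""
    main = sys.modules['__main__']
    return {
        # Interpreter that child processes will launch by default.
        'executable': sys.executable,
        # Module the child re-imports to find the target function; under
        # execfile() this may not point at the script being run.
        'main_file': getattr(main, '__file__', None),
    }


if __name__ == '__main__':
    info = spawn_diagnostics()
    for key in sorted(info):
        print('%s: %s' % (key, info[key]))
    # If 'executable' is not a plain python.exe (e.g. it points into the
    # Abaqus installation), multiprocessing.set_executable() can redirect
    # children to a standard interpreter.
```

Both values are worth checking when the parent is an embedded interpreter such as Abaqus/CAE, since either one differing from a plain `abaqus python` run would explain children that start but never reach the target function.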

What might be causing the hang-up, and how can I fix it? Is there an environment variable that must be set? Are there other commercial systems that use a similar procedure that I can learn from or emulate?

Note that any solution must be available in the Python 2.6 standard library.

System details: Windows 10 64-bit, Python 2.6, Abaqus/CAE 6.12 or 6.14

Example Test Script:

# multi.py
import multiprocessing
import time

def fib(n):
    a,b = 0,1
    for i in range(n):
        a, b = a+b, a
    return a

def workerfunc(num):
    fname = ''.join(('worker_', str(num), '.txt'))
    with open(fname, 'w') as f:
        f.write('Starting Worker {0}\n'.format(num))
        count = 0
        while count < 1000:  # <-- Repeat a bunch of times.
            count += 1
            a=fib(20)
        line = ''.join((str(a), '\n'))
        f.write(line)
        f.write('End Worker {0}\n'.format(num))

if __name__ == '__main__':
    jobs = []
    for i in range(2):       # <-- Setting the number of processes manually
        p = multiprocessing.Process(target=workerfunc, args=(i,))
        jobs.append(p)
        print 'starting', p
        p.start()
        print 'done starting', p
    for j in jobs:
        print 'joining', j
        j.join()
        print 'done joining', j

1 A widely known finite element analysis package

2 The script is a blend of a fairly standard Python fib() function and examples from PyMOTW

I have to write an answer as I cannot comment yet.

What I can imagine as a reason is that Python multiprocessing spawns a whole new process with its own non-shared memory. So if you create an object in your script and then start a new process, that new process contains a copy of the memory, and you have two objects that can go in different directions. When something from Abaqus is present in the original Python process (which I suspect), it gets copied too, and this copy could create such behaviour.

As a solution, I think you could extend Python with C (which is capable of using multiple cores in a single process) and use threads there.
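Staying within the Python 2.6 standard library, the threading module also keeps everything in one process with shared memory. CPython's GIL means this is a sketch of the shared-memory idea rather than a CPU speed-up, and it reuses fib() from the question's script:

```python
import threading


def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = a + b, a
    return a


def run_threaded(num_workers=2, n=20):
    """Run fib() in several threads of one process.

    Unlike multiprocessing, all threads see the same `results` dict,
    so a lock guards the concurrent writes to it.
    """
    results = {}
    lock = threading.Lock()

    def worker(idx):
        value = fib(n)
        with lock:
            results[idx] = value

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Since the threads share one address space, nothing is copied at start-up, which sidesteps the memory-copy issue described above (at the cost of true parallelism for pure-Python code).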

Just wanted to say that I have run into this exact issue. My solution at the current time is to compartmentalize my scripting. This may work for you if you're trying to run parameter sweeps over a given model, run geometric variations on the same model, etc.

I first generate scripts to accomplish each portion of my modelling process:

  1. Generate the input file using CAE/Python.
  2. Extract the data I want and put it in a text file.

With these created, I use text replacement to quickly generate N Python scripts of each type, one for each discrete parameter set I'm interested in.
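The text-replacement step can be done with string.Template from the standard library. The template body and the parameter names ($width, $height) below are hypothetical stand-ins, since the post doesn't show the real scripts:

```python
from string import Template

# Hypothetical model-script template; the real scripts and parameter
# names are not shown in the post.
SCRIPT_TEMPLATE = Template(
    "# generated model script\n"
    "width = $width\n"
    "height = $height\n"
)


def render_scripts(param_sets):
    """Render one script body per discrete parameter set."""
    return [SCRIPT_TEMPLATE.substitute(params) for params in param_sets]
```

Each rendered body would then be written out as model_0.py, model_1.py, and so on, ready to be passed to a separate Abaqus instance.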

I then wrote a parallel processing tool in Python to call multiple Abaqus instances as subprocesses. It does the following:

  1. Call CAE through subprocess.call for each model generation script. The script allows you to choose how many instances to run at once, to keep you from taking every license on the server.

  2. Execute the Abaqus solver for the generated models in the same way, with parameters for cores per job and the total number of cores used.

  3. Extract data using the same process as in step 1.

There is some overhead in repeatedly checking out CAE licenses when generating the models, but in my testing it is far outweighed by the benefit of being able to generate 10+ input files simultaneously.
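The batching idea behind the tool can be sketched with subprocess alone. The Abaqus command lines in the comment are hypothetical stand-ins; trivial interpreter calls are used so the sketch is self-contained:

```python
import subprocess
import sys


def run_in_batches(commands, max_concurrent=2):
    """Run commands as subprocesses, at most max_concurrent at a time.

    Capping concurrency keeps one user from taking every license on
    the server. Returns the commands' return codes in start order.
    """
    running = []
    return_codes = []
    for cmd in commands:
        running.append(subprocess.Popen(cmd))
        if len(running) >= max_concurrent:
            # Block until the oldest running job finishes before
            # launching another.
            return_codes.append(running.pop(0).wait())
    for proc in running:
        return_codes.append(proc.wait())
    return return_codes


if __name__ == '__main__':
    # Hypothetical real usage -- the actual scripts are not shown:
    #   cmds = [['abaqus', 'cae', 'noGUI=model_%d.py' % i] for i in range(10)]
    # Trivial interpreter calls stand in for the solver here:
    cmds = [[sys.executable, '-c', 'pass'] for _ in range(4)]
    print(run_in_batches(cmds, max_concurrent=2))
```

Waiting on the oldest job is simpler than polling every running process, and since each Abaqus instance is a fully separate process, this sidesteps the multiprocessing hang entirely.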

I can put some of the scripts up on GitHub if you think the process outlined above would be helpful for your application.

Cheers, Nathan
