Python: Deadlock of a single lock in multiprocessing
I'm using pyserial to acquire data with multiprocessing. The way I share data is very simple. So:
I have member objects in my class:
self.mpManager = mp.Manager()
self.shared_return_list = self.mpManager.list()
self.shared_result_lock = mp.Lock()
I start my multiprocessing process this way:
process = mp.Process(target=do_my_stuff,
                     args=(self.shared_stopped, self.shared_return_list, self.shared_result_lock)
                     )
where do_my_stuff is a global function.
Now, the part that fills the list in the process function:
if len(acqBuffer) > acquisitionSpecs["LengthToPass"]:
    shared_lock.acquire()
    shared_return_list.extend(acqBuffer)
    del acqBuffer[:]
    shared_lock.release()
And the part that takes that to the local thread for use is:
while len(self.acqBuffer) <= 0 and (not self.stopped):
    # copy list from shared buffer and empty it
    self.shared_result_lock.acquire()
    self.acqBuffer.extend(self.shared_return_list)
    del self.shared_return_list[:]
    self.shared_result_lock.release()
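Putting the two halves together, here is a minimal self-contained sketch of this producer/consumer pattern (simplified: the producer runs once and the names are illustrative, not the original member names):

```python
import multiprocessing as mp


def producer(shared_return_list, shared_lock):
    """Fill a local buffer, then hand it off to the shared list under the lock."""
    acq_buffer = list(range(10))  # stands in for data read from the device
    shared_lock.acquire()
    try:
        shared_return_list.extend(acq_buffer)
        del acq_buffer[:]
    finally:
        shared_lock.release()


def main():
    manager = mp.Manager()
    shared_return_list = manager.list()
    shared_lock = mp.Lock()

    process = mp.Process(target=producer,
                         args=(shared_return_list, shared_lock))
    process.start()
    process.join()

    # Drain the shared list into a local buffer under the same lock.
    local_buffer = []
    shared_lock.acquire()
    try:
        local_buffer.extend(shared_return_list)
        del shared_return_list[:]
    finally:
        shared_lock.release()
    return local_buffer


if __name__ == "__main__":
    print(main())
```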
The problem:
Although there's only one lock, my program occasionally ends in a deadlock somehow! After waiting some time, my program freezes. After adding prints before and after the locks, I found that it freezes at a lock and somehow reaches a deadlock.
If I use a recursive lock, RLock(), it works with no problems. Not sure whether I should do that.
How is this possible? Am I doing something wrong? I expect that if both processes try to acquire the lock, they should block until the other process releases it.
Without an SSCCE, it's difficult to know whether there's something else going on in your code.
One possibility is that an exception is thrown after the lock is acquired. Try wrapping each of your locked sections in a try/finally clause. E.g.:
shared_lock.acquire()
try:
    shared_return_list.extend(acqBuffer)
    del acqBuffer[:]
finally:
    shared_lock.release()
and:
self.shared_result_lock.acquire()
try:
    self.acqBuffer.extend(self.shared_return_list)
    del self.shared_return_list[:]
finally:
    self.shared_result_lock.release()
You could even add except clauses and log any exceptions raised, if this turns out to be the issue.
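If logging does turn out to be useful, one way to combine the two suggestions is an except clause that records the error before re-raising (a sketch using the standard logging module; the function name is illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def drain_shared_list(shared_return_list, shared_lock, local_buffer):
    """Copy the shared list into a local buffer, logging any failure."""
    shared_lock.acquire()
    try:
        local_buffer.extend(shared_return_list)
        del shared_return_list[:]
    except Exception:
        # Record the traceback so a silent failure can't masquerade as a deadlock.
        logger.exception("failed while draining the shared buffer")
        raise
    finally:
        shared_lock.release()  # always runs, so the lock cannot stay held
```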
It turned out it's not a deadlock. My fault! The problem was that the data acquired from the device is sometimes so huge that copying it through
shared_return_list.extend(acqBuffer)
del acqBuffer[:]
takes such a long time that the program appears to freeze. I solved this issue by moving the data in chunks and by limiting the amount of data pulled from the device.
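The chunked fix can be sketched as moving a bounded number of items per lock acquisition instead of the whole buffer, so the lock is never held for long (the chunk size and helper name are illustrative, not from the original code):

```python
CHUNK_SIZE = 4096  # illustrative cap on items moved per lock hold


def push_chunk(acq_buffer, shared_return_list, shared_lock):
    """Move at most CHUNK_SIZE items; the rest stay in acq_buffer for the next call."""
    chunk, remainder = acq_buffer[:CHUNK_SIZE], acq_buffer[CHUNK_SIZE:]
    shared_lock.acquire()
    try:
        shared_return_list.extend(chunk)
    finally:
        shared_lock.release()
    # Keep the unsent remainder locally, outside the locked section.
    del acq_buffer[:]
    acq_buffer.extend(remainder)
```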