cannot pickle 'weakref' object in python
I'm trying to run the following code. Its main aim is to do much more complex procedures; I simplified it down to two loops.
I keep getting the error below (I searched for the error with no luck):
from multiprocessing import Process

def load():
    for i in range(0, 10000):
        print("loadddinngg", i)

def copy(p1):
    # fetch all files
    while p1.is_alive():
        for i in range(0, 100000):
            print("coppppyyy", i)

class multithreading:
    def __init__(self):
        p1 = Process(target=load, args=())
        p2 = Process(target=copy, args=(p1,))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
  File "C:\Users\untitled10\toDelete.py", line 19, in __init__
    p2.start()
  File "C:\Program Files\Python38_64bit\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Program Files\Python38_64bit\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Program Files\Python38_64bit\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Program Files\Python38_64bit\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Program Files\Python38_64bit\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'weakref' object
The args to a Process object are pickled in order to send that information to the new process (so that it can unpickle and recreate the same Python objects in its own address space). Process objects cannot themselves be pickled.
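A quick way to see this for yourself is to try pickling a Process directly. The snippet below is a minimal sketch (not from the question); note that the exact error message varies by platform and context — under the Windows spawn method it surfaces as "cannot pickle 'weakref' object", while plain pickling typically fails on the process's authentication key — but it is a TypeError either way.

```python
import pickle
from multiprocessing import Process

def noop():
    pass

p = Process(target=noop)
try:
    pickle.dumps(p)
    picklable = True
except TypeError:
    # Pickling a Process object fails; the exact message
    # depends on the platform and start method.
    picklable = False

print("Process picklable?", picklable)  # → Process picklable? False
```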
Separately, as stated in the documentation:
Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.
We therefore cannot expect to use is_alive within the spawned processes.
To communicate between processes, use multiprocessing.Queue. For this specific case, we want p2 to stop when p1 is done. That means p1 has to tell p2 that it is just about to finish, and then p2 receives that message and responds by stopping.
A trivial example might look like:
from multiprocessing import Process, Queue
from queue import Empty  # raised when a non-blocking get finds the queue empty

def load(q):
    do_some_work()
    # the actual object that we `put` in the Queue
    # doesn't matter for this trivial example.
    q.put(None)

def copy(q):
    while True:
        do_some_other_work()
        try:
            # use a non-blocking get: a plain q.get() would block
            # forever instead of raising Empty
            q.get_nowait()
        except Empty:
            pass  # the message wasn't sent yet
        else:
            return  # it was sent, so stop this process too

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=load, args=(q,))
    p2 = Process(target=copy, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
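An alternative to the queue-based sentinel (my suggestion, not part of the original answer) is multiprocessing.Event, which is purpose-built for this kind of one-way "I'm done" signal: p1 calls set() when it finishes, and p2 polls is_set() instead of draining a queue. A minimal sketch:

```python
from multiprocessing import Process, Event

def load(done):
    for i in range(3):
        print("loading", i)
    done.set()  # signal that load has finished

def copy(done):
    # keep working until the other process sets the event
    while not done.is_set():
        print("copying")

if __name__ == '__main__':
    done = Event()
    p1 = Process(target=load, args=(done,))
    p2 = Process(target=copy, args=(done,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
```

Event objects are designed to be shared between processes, so they pickle cleanly when passed through args, unlike the Process object in the question.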