
can't pickle _thread.RLock objects when using a webservice

I am using Python 3.6.

I am trying to use multiprocessing from inside a class method, shown below as SubmitJobsUsingMultiProcessing(), which in turn calls another class method.

I keep running into this error: TypeError: can't pickle _thread.RLock objects.

I have no idea what this means. I suspect that the line below which establishes a connection to a webserver API might be responsible, but I am all at sea as to why.
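For reference, the error reproduces in isolation with nothing but the standard library (a minimal sketch, not taken from the code below):

import pickle
import threading

# An RLock wraps OS-level state, so pickle refuses it outright on 3.6:
pickle.dumps(threading.RLock())
# TypeError: can't pickle _thread.RLock objects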

I am not a proper programmer (I write code as part of a portfolio modeling team), so if this is an obvious question please pardon my ignorance, and many thanks in advance.

import multiprocessing as mp
import functools
import pickle
import sys
import time
from collections import OrderedDict
# Client is assumed to come from whichever SOAP library the project uses
# (e.g. suds or zeep); its import was not shown in the original post.

# Both functions below are methods of a larger class, shown as posted.

def SubmitJobsUsingMultiProcessing(self, PartitionsOfAnalysisDates, PickleTheJobIdsDict=True):
    if self.ExportSetResult == "SUCCESS":
        NumPools = mp.cpu_count()
        PoolObj = mp.Pool(NumPools)
        userId, clientId, password, expSetName = self.userId, self.clientId, self.password, self.expSetName
        PartialFunctor = functools.partial(self.SubmitJobsAsOfDate, userId=userId, clientId=clientId,
                                           password=password, expSetName=expSetName)
        # Note: PartialFunctor is built but never used; the bound method is mapped directly.
        Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
        BatchJobIDs = OrderedDict((key, val) for Dct in Result for key, val in Dct.items())
        with open(self.JobIdPickleFileName, 'wb') as f_pickle:
            pickle.dump(BatchJobIDs, f_pickle, -1)

def SubmitJobsAsOfDate(self, ListOfDatesForBatchJobs, userId, clientId, password, expSetName):
    client = Client(self.url, proxy=self.proxysettings)
    if self.ExportSetResult != "SUCCESS":
        print("The export set creation was not successful...exiting")
        sys.exit()

    BatchJobIDs = OrderedDict()
    NumJobsSubmitted = 0
    CurrentProcessID = mp.current_process()

    for AnalysisDate in ListOfDatesForBatchJobs:
        jobName = "Foo_" + str(AnalysisDate)
        print('Sending job from process : ', CurrentProcessID, ' : ', jobName)
        jobId = client.service.SubmitExportJob(userId, clientId, password, expSetName, AnalysisDate, jobName, False)
        BatchJobIDs[AnalysisDate] = jobId
        NumJobsSubmitted += 1

        # Sleep for 30 secs every 100 jobs to avoid the SSL timeout error
        if NumJobsSubmitted % 100 == 0:
            print('100 jobs have been submitted thus far from process : ', CurrentProcessID,
                  '---Sleeping for 30 secs to avoid the SSL time out error')
            time.sleep(30)
    self.BatchJobIDs = BatchJobIDs
    return BatchJobIDs
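Note that Pool.map must pickle whatever callable it is given; self.SubmitJobsAsOfDate is a bound method, and a bound method carries its instance, so every attribute of self has to be picklable. A self-contained illustration (the Job class below is hypothetical, not part of the posted code):

import threading
from multiprocessing.reduction import ForkingPickler   # the pickler Pool uses

class Job:                                  # hypothetical stand-in for the real class
    def __init__(self):
        self.lock = threading.RLock()       # any unpicklable attribute has this effect

    def run(self, dates):
        return dates

# Serialising the bound method drags the whole instance, lock included:
ForkingPickler.dumps(Job().run)             # TypeError: can't pickle _thread.RLock objects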

Below is the trace:

Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1599, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1026, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 289, in <module>
    BDTProcessObj.SubmitJobsUsingMultiProcessing(Partitions)
  File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 190, in SubmitJobsUsingMultiProcessing
    Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 644, in get
    raise self._value
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 424, in _handle_tasks
    put(task)
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects

I am struggling with a similar problem. There was a bug in Python <=3.5 whereby _thread.RLock objects did not raise an error when pickled (they cannot actually be pickled). For the Pool object to work, a function and its arguments must be passed to it from the main process, and this relies on pickling (pickling is a means of serialising objects). In my case the RLock object is somewhere in the logging module. I suspect your code would work fine on 3.5. Good luck. See this bug resolution.
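Building on that point about pickling: one common workaround is to map a module-level function and rebuild the client inside each worker process, so that self (and anything unpicklable it holds) never crosses the process boundary. A sketch, under the assumption that the workers only need the plain credential strings; Client stands in for whichever SOAP library the post uses:

import functools
import multiprocessing as mp

# from suds.client import Client   # assumption: the SOAP library used in the post

def submit_jobs_as_of_date(dates, url, proxysettings, userId, clientId, password, expSetName):
    # The client is created inside the child process and is never pickled.
    client = Client(url, proxy=proxysettings)
    jobs = {}
    for d in dates:
        jobs[d] = client.service.SubmitExportJob(userId, clientId, password,
                                                 expSetName, d, "Foo_" + str(d), False)
    return jobs

def SubmitJobsUsingMultiProcessing(self, PartitionsOfAnalysisDates):
    # Bind only plain, picklable values -- not self.
    worker = functools.partial(submit_jobs_as_of_date,
                               url=self.url, proxysettings=self.proxysettings,
                               userId=self.userId, clientId=self.clientId,
                               password=self.password, expSetName=self.expSetName)
    with mp.Pool(mp.cpu_count()) as pool:
        return pool.map(worker, PartitionsOfAnalysisDates)

An alternative, when the instance itself must travel, is to define __getstate__/__setstate__ on the class so the unpicklable attributes are dropped on the way out and recreated on the other side.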
