
Control maximum recursion depth of function

I wrote an object to manage Python processes. The manager keeps these processes alive and communicates with them through pipes. To do that, a function calls itself recursively while a process still has things to do. Naturally, after X calls Python raises `RuntimeError: maximum recursion depth exceeded while calling a Python object`.

I can raise the limit with sys.setrecursionlimit(x), but that is not clean (it applies to the entire program)... How can I control the maximum recursion depth of this one function?
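For reference, a minimal sketch showing that sys.setrecursionlimit is interpreter-wide rather than per-function (the 10_000 value is just an illustration):

```python
import sys

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(10_000)            # affects every function in the process
assert sys.getrecursionlimit() == 10_000
sys.setrecursionlimit(old_limit)         # restore the previous global limit
```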

My program:

processmanager.py

import sys

if sys.version_info < (3, 3):
  sys.stdout.write("Python 3.3 required\n")
  sys.exit(1)

from multiprocessing import Process, Pipe
from multiprocessing.connection import wait

def chunk(seq, m):
  i, j, x = len(seq), 0, []
  for k in range(m):
    a, j = j, j + (i + k) // m
    x.append(seq[a:j])
  return x

class KeepedAliveProcessManager(object):

  def __init__(self, nb_process, target):
    self.processs = []
    self.target = target
    self.nb_process = nb_process
    self.readers_pipes = []
    self.writers_pipes = []

  def _start(self, chunked_things):
    for i in range(self.nb_process):
      local_read_pipe, local_write_pipe = Pipe(duplex=False)
      process_read_pipe, process_write_pipe = Pipe(duplex=False)
      self.readers_pipes.append(local_read_pipe)
      self.writers_pipes.append(process_write_pipe)
      p = Process(target=run_keeped_process, args=(self.target, local_write_pipe, process_read_pipe, chunked_things[i]))
      p.start()
      self.processs.append(p)
      local_write_pipe.close()
      process_read_pipe.close()

  def stop(self):
    for writer_pipe in self.writers_pipes:
      writer_pipe.send('stop')

  def get_their_work(self, things_to_do):
    chunked_things = chunk(things_to_do, self.nb_process)
    if not self.processs:
      self._start(chunked_things)
    else:
      for i in range(self.nb_process):
        #print('send things')
        self.writers_pipes[i].send(chunked_things[i])
    things_done_collection = []
    reader_useds = []
    while self.readers_pipes:
      for r in wait(self.readers_pipes):
        try:
          things_dones = r.recv()
        except EOFError:
          reader_useds.append(r)
          self.readers_pipes.remove(r)
        else:
          reader_useds.append(r)
          self.readers_pipes.remove(r)
          things_done_collection.append(things_dones)
    self.readers_pipes = reader_useds
    return things_done_collection

def run_keeped_process(target, main_write_pipe, process_read_pipe, things):
  things_dones = target(things)
  main_write_pipe.send(things_dones)
  del things_dones
  del things

  new_things = None
  readers = [process_read_pipe]
  readers_used = []
  while readers:
    for r in wait(readers):
      try:
        new_things = r.recv()
        #print('p: things received')
      except EOFError:
        pass
      finally:
        readers.remove(r)
  #print('p: continue')
  if new_things != 'stop':
    run_keeped_process(target, main_write_pipe, process_read_pipe, new_things)

main.py

from processmanager import KeepedAliveProcessManager

def do_things_in_process(things_to_do = []):
  return [i ** 12 for i in things_to_do] 

process_manager = KeepedAliveProcessManager(2, do_things_in_process)
for i in range(1000):
  print(process_manager.get_their_work([0,1,2,3]))
process_manager.stop()

The maximum recursion error occurs here:

[...]
File "/home/bux/Projets/simtermites/sandbox/parallel/processmanager.py", line 118, in run_keeped_process
    run_keeped_process(target, main_write_pipe, process_read_pipe, new_things)
  File "/home/bux/Projets/simtermites/sandbox/parallel/processmanager.py", line 118, in run_keeped_process
    run_keeped_process(target, main_write_pipe, process_read_pipe, new_things)
  File "/home/bux/Projets/simtermites/sandbox/parallel/processmanager.py", line 100, in run_keeped_process
    main_write_pipe.send(things_dones)
  File "/usr/lib/python3.3/multiprocessing/connection.py", line 206, in send
    ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)
  File "/usr/lib/python3.3/multiprocessing/forking.py", line 40, in __init__
    Pickler.__init__(self, *args)
RuntimeError: maximum recursion depth exceeded while calling a Python object

def run_keeped_process(target, main_write_pipe, process_read_pipe, things):
  #do some stuff here
  if new_things != 'stop':
    run_keeped_process(target, main_write_pipe, process_read_pipe, new_things)

This function looks like it could be changed so that it isn't recursive at all:

def run_keeped_process(target, main_write_pipe, process_read_pipe, things):
  while True:
    #do some stuff here
    if new_things == 'stop':
      break
    things = new_things

Now you'll never hit the maximum recursion depth.
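A runnable sketch of that transformation in isolation, using a deque as a stand-in for process_read_pipe.recv() (the names inbox and run_worker are illustrative; only the while-loop-instead-of-recursion shape mirrors the answer):

```python
from collections import deque

def run_worker(target, inbox):
  """Iterative replacement for the recursive worker:
  keep pulling batches until the 'stop' sentinel arrives."""
  results = []
  while True:
    things = inbox.popleft()        # stand-in for process_read_pipe.recv()
    if things == 'stop':
      break
    results.append(target(things))  # the per-batch work
  return results

# usage: three batches followed by the stop sentinel
inbox = deque([[0, 1, 2], [3, 4], [5], 'stop'])
done = run_worker(lambda xs: [x ** 2 for x in xs], inbox)
# done == [[0, 1, 4], [9, 16], [25]]
```

However deep the inbox gets, the call stack stays flat, so no recursion limit is ever involved.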

Pass a depthCount parameter and decrement it with every recursive call; don't recurse when the parameter drops below 0.
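A minimal sketch of that depthCount idea (the calls list and the per-call work are hypothetical stand-ins, not part of the original code):

```python
calls = []

def run_limited(things, depth_count):
  """Recursive worker that bails out once depth_count is exhausted."""
  if depth_count < 0:
    return                   # refuse to recurse any deeper
  calls.append(things)       # stand-in for the real per-batch work
  run_limited(things + 1, depth_count - 1)

run_limited(0, 5)
# handles depths 5..0, i.e. exactly 6 calls
```

This caps the depth of this one function without touching the interpreter-wide limit, though the caller then has to decide what to do with work left over when the budget runs out.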
