
Having child processes allows the rpc-server to restart while the children survive

Scenario

I have an rpc-server that needs to spawn important processes ( multiprocessing.Process ) that last for several days. For security/safety reasons, I don't want these processes' survival to depend on the rpc-server. Therefore, I want the server to be able to die and reboot while the processes keep running.

Orphaning processes

This problem can be solved as follows (don't paste this where you don't want to lose unsaved work; it will close your Python session):

import os
import multiprocessing
import time

def _job(data):
    for _ in range(3):
        print multiprocessing.current_process(), "is working"
        time.sleep(2)
    print multiprocessing.current_process(), "is done"

#My real worker gets a Connection object as part of a
#multiprocessing.Pipe among other arguments
worker = multiprocessing.Process(target=_job, args=(None,))
worker.daemon = True
worker.start()
#os._exit skips the atexit cleanup with which multiprocessing would
#normally terminate daemonic children, so the worker survives the exit
os._exit(0)

Problem: Closing the rpc-server's socket while a worker is alive

Exiting the main process does not seem to help with, or have any effect on, the socket-closing issue. So, to illustrate the problem with the server reboot, the reboot is simulated by starting a second server with identical parameters after the first one has been closed.

The following works perfectly:

import SimpleXMLRPCServer
HOST = "127.0.0.1"
PORT = 45212
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
s.server_close()

However, if a worker has been started, re-creating the server raises a socket.error saying the address is already in use:

s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job, args=(None,))
worker.start()
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT)) #raises socket.error
worker.join()
s.server_close()

Manually shutting down the server's socket does work:

import socket
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job, args=(None,))
worker.start()
s.socket.shutdown(socket.SHUT_RDWR)
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker.join()
s.server_close()

But this behavior really worries me. I don't pass the socket to the worker in any way, yet the worker appears to get hold of it anyhow.
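This is standard os.fork behaviour rather than anything multiprocessing does on purpose: on POSIX, a forked child starts with copies of all of the parent's open file descriptors, including any listening socket. A minimal sketch demonstrating the inheritance (POSIX only; binding to port 0 just grabs any free port):

import os
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0)) #port 0: let the OS pick a free port
listener.listen(1)

pid = os.fork()
if pid == 0:
    #Child: the descriptor is open here too, without ever being passed in.
    #fstat succeeding proves the child holds a live reference to the socket.
    os.fstat(listener.fileno())
    print("child also holds fd %d" % listener.fileno())
    os._exit(0)

os.waitpid(pid, 0)
listener.close()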

There are similar questions posted previously, but they tend to pass the socket to the worker deliberately, which is not intended here. If I do pass the socket through, though, I can close it in the worker and avoid the shutdown hack:

def _job2(notMySocket):
    notMySocket.close()
    for _ in range(3):
        print multiprocessing.current_process(), "is working"
        time.sleep(2)
    print multiprocessing.current_process(), "is done"

s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job2, args=(s.socket,))
worker.start()
time.sleep(0.1) #Just to be sure worker gets to close socket in time
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT)) 
worker.join()
s.server_close()

But the server's socket has absolutely no reason to visit the worker. I don't like this solution one bit, even if it's the best one so far.

Question

Is there a way to limit what gets forked when using multiprocessing.Process, so that only what I want to pass to the target gets copied, and not all open sockets and other resources?

In my case, the goal is to get this code working:

s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
childPipe, parentPipe = multiprocessing.Pipe()
worker = multiprocessing.Process(target=_job, args=(childPipe,))
worker.start()
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT)) #raises socket.error
worker.join()
s.server_close()

If you're using Python 2.x, I don't think there's any way to avoid that inheritance on POSIX platforms. os.fork will always be used to create the new process, which means the entire state of the parent process is copied to the child. All you can do is immediately close the socket in the child, which is what you're already doing. The only way to avoid the inheritance altogether is to start the processes before you start the server. You may be able to do this by starting the Process early and then using a multiprocessing.Queue to deliver work items (instead of the args keyword argument), or a multiprocessing.Event to signal that it should actually start working, as sketched below. This may or may not be possible in your use case, depending on what you need to send to the child process.
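A minimal sketch of that pre-start approach, assuming the work items are picklable and that a None item is an acceptable shutdown sentinel (both are assumptions of this sketch, not requirements of multiprocessing):

import multiprocessing
import SimpleXMLRPCServer

HOST = "127.0.0.1"
PORT = 45212

def _queue_job(jobs):
    #Block until work arrives; None is used here as a stop sentinel.
    while True:
        item = jobs.get()
        if item is None:
            break
        #... the days-long work with `item` would happen here ...

jobs = multiprocessing.Queue()
worker = multiprocessing.Process(target=_queue_job, args=(jobs,))
worker.start() #forked BEFORE the server exists, so there is no socket to inherit

s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
jobs.put("some work") #deliver work items instead of using args
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT)) #no socket.error this time
s.server_close()

jobs.put(None)
worker.join()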

However, if you're using Python 3.4+ (or can migrate to 3.4+), you can use the spawn or forkserver contexts to avoid having the socket be inherited.

spawn

The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object's run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver.

Available on Unix and Windows. The default on Windows.
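As a sketch, the worker from the question started via the module-level spawn default could look like this (Python 3.4+, so print is a function; the __main__ guard is required because spawn re-imports the main module in the child):

import multiprocessing
import time

def _job(data):
    for _ in range(3):
        print(multiprocessing.current_process(), "is working")
        time.sleep(2)
    print(multiprocessing.current_process(), "is done")

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    worker = multiprocessing.Process(target=_job, args=(None,))
    worker.start()
    worker.join()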

forkserver

When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.

Example:

import multiprocessing
import time

def _job2():
    for _ in range(3):
        print(multiprocessing.current_process(), "is working")
        time.sleep(2)
    print(multiprocessing.current_process(), "is done")

if __name__ == '__main__':
    #The guard matters: forkserver imports the main module in its children.
    ctx = multiprocessing.get_context('forkserver')
    worker = ctx.Process(target=_job2)
    worker.start()
    worker.join()
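Applied to the failing sequence from the question, a sketch under Python 3 (where SimpleXMLRPCServer has become xmlrpc.server): the rebind now succeeds because the worker never inherits the listening descriptor.

import multiprocessing
import time
from xmlrpc.server import SimpleXMLRPCServer

HOST = "127.0.0.1"
PORT = 45212

def _job(data):
    time.sleep(6) #stand-in for the days-long work

if __name__ == '__main__':
    ctx = multiprocessing.get_context('forkserver')
    s = SimpleXMLRPCServer((HOST, PORT))
    worker = ctx.Process(target=_job, args=(None,))
    worker.start()
    s.server_close()
    s = SimpleXMLRPCServer((HOST, PORT)) #no socket.error this time
    worker.join()
    s.server_close()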
