
Jupyter Notebook GPU memory release after training model

How can we free GPU memory after finishing a deep learning training run in a Jupyter notebook? The problem is that no matter which framework I use (TensorFlow, PyTorch), the memory allocated on the GPU is not released unless I kill the process manually, or kill the kernel and restart Jupyter. Is there any way to get rid of this problem by automating those steps?

The only workaround I found was to use multiprocessing: executing the training in a subprocess, so that the GPU memory is released when the subprocess exits.

An example:

from multiprocessing import Process

def Training(arguments):
    ...
    return model

if __name__ == '__main__':
    Subprocess = Process(# The complete function defined above
                         target=Training,
                         # Pass the arguments defined in the function above.
                         # Note the comma after the arguments: it tells
                         # Python that args is a tuple, which Process requires.
                         args=(arguments,))

    # Start the defined subprocess
    Subprocess.start()
    # Wait for the subprocess to complete; the GPU memory it used is
    # released when the process exits
    Subprocess.join()
