I am using a class to create a tensorflow model. Within a for loop, I am creating an instance which I must delete at the end of each iteration in order to free up memory. Deletion does not work and I am running out of memory. Here is a minimal example of what I tried:
import numpy as np
import numpy as np

class tfModel:
    def __init__(self, x):
        ...
    def predict(self, x):
        ...
        return x_new

if __name__ == "__main__":
    x = np.ones(100)
    for i in range(0, 3):
        model = tfModel(x)
        x = model.predict(x)
        del model
I've read in related questions that "del" only deletes a reference, not the class instance itself. But how can I ensure that all references are deleted and the instance can be garbage collected?
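To illustrate what I mean, here is a minimal, TensorFlow-free sketch (the `Model` class is just a hypothetical stand-in): `del` removes one name, and the instance is only collected once *every* reference is gone. `weakref` lets you observe this without keeping the object alive:

```python
import weakref

class Model:
    """Hypothetical stand-in for a heavyweight model object."""
    pass

m = Model()
r = weakref.ref(m)       # track the instance without keeping it alive
alias = m                # a second reference to the same instance

del m                    # removes the name 'm'; 'alias' still refers to it
assert r() is not None   # the instance is still alive

del alias                # last reference gone -> CPython frees it immediately
assert r() is None       # the weakref now reports the instance as collected
```

So the instance survives as long as anything else (a list, a closure, a TensorFlow graph) still holds a reference to it.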
I think you are conflating two things. The model should be created only once, outside the loop. Then you iterate over your data (each x, or each batch of examples) and feed it into the model to get predictions. The results can be serialized to disk whenever your memory cannot hold them all. More concretely, something like this:
class tfModel:
    def __init__(self):
        ...
    def predict(self, x):
        ...
        return x_new

def my_x_generator():
    for x in range(100):
        yield x

THRESHOLD = 16

if __name__ == "__main__":
    model = tfModel()
    my_result_buffer = []
    for x in my_x_generator():
        x_pred = model.predict(x)
        my_result_buffer.append(x_pred)
        if len(my_result_buffer) > THRESHOLD:
            ## serialize my_result_buffer to disk
            my_result_buffer = []
Also note that in my sample code above, tfModel should not depend on x (x is removed from __init__). Of course, you could still use model parameters to initialize your model.

This seems to be a TensorFlow-specific problem. Using the multiprocessing module, one can spawn a process inside the for loop for each iteration. When a process finishes, it is closed and its memory is freed.
I found this solution here: Clearing Tensorflow GPU memory after model execution