
Warning `tried to deallocate nullptr` when using tensorflow eager execution with tf.keras

As per the TensorFlow team's suggestion, I'm getting used to TensorFlow's eager execution with tf.keras. However, whenever I train a model, I receive a warning (EDIT: actually, I receive this warning repeated many times, more than once per training step, flooding my standard output):

E tensorflow/core/common_runtime/bfc_allocator.cc:373] tried to deallocate nullptr

The warning doesn't seem to affect the quality of the training but I wonder what it means and if it is possible to get rid of it.

I use a conda virtual environment with Python 3.7 and TensorFlow 1.12 running on a CPU. (EDIT: a test with Python 3.6 gives the same results.) Minimal code that reproduces the warnings follows. Interestingly, if you comment out the line tf.enable_eager_execution(), the warnings disappear.

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()
N_EPOCHS = 50
N_TRN = 10000
N_VLD = 1000

# the label is positive if the input is a number larger than 0.5
# a little noise is added, just for fun
x_trn = np.random.random(N_TRN)
x_vld = np.random.random(N_VLD)
y_trn = ((x_trn + np.random.random(N_TRN) * 0.02) > 0.5).astype(float)
y_vld = ((x_vld + np.random.random(N_VLD) * 0.02) > 0.5).astype(float)

# a simple logistic regression
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_dim=1))
model.add(tf.keras.layers.Activation('sigmoid'))

model.compile(
    optimizer=tf.train.AdamOptimizer(),
    # optimizer=tf.keras.optimizers.Adam(),  # doesn't work at all with tf eager execution
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# Train model on dataset
model.fit(
    x_trn, y_trn,
    epochs=N_EPOCHS,
    validation_data=(x_vld, y_vld),
)
model.summary()

Quick solutions:

  • Downgrade to TF 1.11: the warning did not appear when I ran the same script in TF 1.11, and the optimization reached the same final validation accuracy on the synthetic dataset.

    OR

  • Suppress the errors/warnings using the native os module (adapted from https://stackoverflow.com/a/38645250/2374160 ), i.e., by setting the TensorFlow logging environment variable so that no error messages are shown:

      import os
      os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # must be set before importing tensorflow
      import tensorflow as tf
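For reference, here is what the TF_CPP_MIN_LOG_LEVEL values mean, as a minimal standalone snippet (my own summary of the level mapping, not quoted from the answer above):

```python
import os

# TF_CPP_MIN_LOG_LEVEL filters TensorFlow's C++ log output:
#   '0' -> show all messages (default)
#   '1' -> hide INFO
#   '2' -> hide INFO and WARNING
#   '3' -> hide INFO, WARNING, and ERROR
# It must be set BEFORE `import tensorflow`, because the C++ runtime
# reads it once at import time.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
```

Setting it to '3' is needed here because the nullptr message is logged at error level (the leading "E" in the message), so '1' or '2' would not hide it.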

More info:

  • Solving this error in the correct way may require familiarity with the MKL library and its interfacing in TensorFlow's core, which is written in C++ (this is beyond my current TF expertise)

  • In my case, this memory deallocation error occurred whenever the apply_gradients() method of an optimizer was called. In your script, it is called when the model is being fitted to the training data.

  • This error is raised from here: tensorflow/core/common_runtime/mkl_cpu_allocator.h
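To make the apply_gradients() point above concrete, here is a plain-numpy sketch (my own illustration, not TF's actual implementation) of the logistic-regression training step that model.fit() runs under eager execution, using noise-free labels for simplicity; the warning is printed at the variable-update step, which is what apply_gradients() performs:

```python
import numpy as np

def train_step(w, b, x, y, lr):
    # forward pass: p = sigmoid(w*x + b)
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    # binary cross-entropy gradient w.r.t. the pre-activation, averaged
    grad_z = (p - y) / x.size
    grad_w = np.dot(grad_z, x)
    grad_b = grad_z.sum()
    # "apply_gradients": the in-place variable update; in TF 1.12 with
    # MKL, this is the step at which the nullptr warning is printed
    w -= lr * grad_w
    b -= lr * grad_b
    return w, b

rng = np.random.RandomState(0)
x = rng.random_sample(1000)
y = (x > 0.5).astype(float)  # same labeling rule as the question, no noise
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = train_step(w, b, x, y, lr=0.5)
```

In the real TF 1.12 code path, that update runs through the MKL-aware allocator in mkl_cpu_allocator.h, which is where the deallocation message originates.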

I hope this helps as a temporary solution for convenience.
