
tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2]

Hi, I'm a beginner with deep learning and TensorFlow. I created a CNN (you can see the model below):

import tensorflow as tf

model = tf.keras.Sequential()

model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", input_shape=[512, 640, 3]))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D(2))

model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(2, activation='softmax'))

optimizer = tf.keras.optimizers.SGD(learning_rate=0.2) #, momentum=0.9, decay=0.1)
model.compile(optimizer=optimizer, loss='mse', metrics=['accuracy'])

I tried building and training it on the CPU, and it finished successfully (but very slowly), so I decided to install tensorflow-gpu. I installed everything following the instructions at https://www.tensorflow.org/install/gpu.

But now, when I try to build the model, I get this error:

Traceback (most recent call last):
  File "C:/Users/thano/Documents/Py_workspace/AI_tensorflow/fire_detection/main.py", line 63, in <module>
    model = create_models.model1()
  File "C:\Users\thano\Documents\Py_workspace\AI_tensorflow\fire_detection\create_models.py", line 20, in model1
    model.add(tf.keras.layers.Dense(128, activation='relu'))
  File "C:\Python37\lib\site-packages\tensorflow\python\training\tracking\base.py", line 530, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Python37\lib\site-packages\keras\engine\sequential.py", line 217, in add
    output_tensor = layer(self.outputs[0])
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 977, in __call__
    input_list)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 1115, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 848, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 886, in _infer_output_signature
    self._maybe_build(inputs)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 2659, in _maybe_build
    self.build(input_shapes)  # pylint:disable=not-callable
  File "C:\Python37\lib\site-packages\keras\layers\core.py", line 1185, in build
    trainable=True)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 663, in add_weight
    caching_device=caching_device)
  File "C:\Python37\lib\site-packages\tensorflow\python\training\tracking\base.py", line 818, in _add_variable_with_custom_getter
    **kwargs_for_getter)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer_utils.py", line 129, in make_variable
    shape=variable_shape if variable_shape else None)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 266, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 227, in _variable_v1_call
    shape=shape)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 205, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2626, in default_variable_creator
    shape=shape)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 270, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1613, in __init__
    distribute_strategy=distribute_strategy)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1740, in _init_from_args
    initial_value = initial_value()
  File "C:\Python37\lib\site-packages\keras\initializers\initializers_v2.py", line 517, in __call__
    return self._random_generator.random_uniform(shape, -limit, limit, dtype)
  File "C:\Python37\lib\site-packages\keras\initializers\initializers_v2.py", line 973, in random_uniform
    shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=self.seed)
  File "C:\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\random_ops.py", line 315, in random_uniform
    result = math_ops.add(result * (maxval - minval), minval, name=name)
  File "C:\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 3943, in add
    return gen_math_ops.add_v2(x, y, name=name)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 454, in add_v2
    _ops.raise_from_not_ok_status(e, name)
  File "C:\Python37\lib\site-packages\tensorflow\python\framework\ops.py", line 6941, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2]

Any idea what the problem might be?

The error is telling you that it could not allocate as much VRAM as you are asking for. The simplest way around this kind of problem is to reduce the batch size to a number that fits in the GPU's VRAM.
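As a rough illustration, here is a minimal sketch of what that looks like with the model from the question. The x_train and y_train arrays below are hypothetical placeholders (they are not part of the original post); substitute your own data or tf.data pipeline.

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the real training data: 512x640 RGB frames, 2 classes.
x_train = np.random.rand(64, 512, 640, 3).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=64), num_classes=2)

# A smaller batch_size means fewer 512x640x3 activations have to sit in GPU VRAM at once.
model.fit(x_train, y_train, epochs=5, batch_size=4)  # e.g. 4 or 8 instead of the default 32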

The error message you are getting, tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2], probably indicates that your GPU does not have enough memory for the training job you want to run. Which GPU are you using, and how much vRAM does it have?

When it comes to "out of memory" (OOM) errors during training, the most straightforward fix is to reduce the batch_size hyperparameter.

Other than trial and error, there is no direct way to determine the largest batch_size that fits in the available GPU vRAM during training. A common rule of thumb, however, is to use powers of 2 (e.g. 8, 16, 32).
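A sketch of that trial-and-error search, reusing the hypothetical x_train / y_train arrays from the sketch above: start at a power of two and halve whenever training runs out of memory. Depending on the TensorFlow version, an OOM can leave the GPU in an awkward state, so treat this as a rough probe rather than a guarantee.

import tensorflow as tf

# Halve the batch size each time training exhausts GPU memory.
batch_size = 32
while batch_size >= 1:
    try:
        model.fit(x_train, y_train, epochs=1, batch_size=batch_size)
        print(f"batch_size={batch_size} fits in the available vRAM")
        break
    except tf.errors.ResourceExhaustedError:
        print(f"batch_size={batch_size} ran out of memory, trying {batch_size // 2}")
        batch_size //= 2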

Since this indicates an out-of-memory condition, the first thing you should try is reducing the batch size. This can also happen if your training dataset is very large. You can try training the model on a subset of the training data and see whether that helps.
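For example (again a sketch, assuming the in-memory NumPy arrays from the sketches above), training on a slice of the data:

# Hypothetical subset size; adjust it to your dataset.
subset = 500
model.fit(x_train[:subset], y_train[:subset], epochs=5, batch_size=8)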

If you have a lot of training samples, you may get a ResourceExhaustedError.

From TensorFlow's documentation on ResourceExhaustedError:

For example, this error might be raised if a per-user quota is exhausted, or perhaps the entire file system is out of space.

How to fix this error:

  • Set a smaller batch_size when training the model with the fit method

batch_size: Integer or None. Number of samples per gradient update.

This means that the higher the batch_size, the more memory is needed during training.

  • If you are using a Jupyter notebook, try restarting the kernel

Restarting the kernel resets your notebook and frees all the memory allocated to the variables and methods you have defined!

In my case, the batch size was not the problem. A script I had run earlier still had GPU memory allocated even though it had finished successfully. I verified this with the nvidia-smi command and found that 14 of the 15 GB of vRAM were occupied. So, to free the vRAM, you can run the following script and then try running your code again with the same batch size.

from numba import cuda

cuda.select_device(0)  # pick the GPU that still holds the stale allocation (device 0 here)
cuda.close()           # tear down the CUDA context and release the allocated vRAM
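Since cuda.close() tears down the CUDA context, it is safest to run this as a standalone script (or restart the Python session afterwards) rather than in the middle of a TensorFlow program; running nvidia-smi again should then show the memory released.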
