How to change batch size in VGG16?
How do I change the batch size in VGG16? I am trying this to fix an allocation that exceeds 10% of free system memory.
The error:
2021-12-03 16:17:07.263665: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 4888553472 exceeds 10% of free system memory.
Here is my code:
def labelObjectFromImage(image_path, directory_filename):
    img = cv2.imread(image_path + directory_filename)
    height = img.shape[0]
    width = img.shape[1]
    channels = img.shape[2]
    img = load_img(image_path + directory_filename, target_size=(height, width))
    model = VGG16(weights="imagenet", include_top=False, input_shape=(height, width, channels))
    img = img_to_array(img)
    img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
    img = preprocess_input(img)
    yhat = model.predict(img)
    label = decode_predictions(yhat)
    label = label[0][0]
    print(label)
I tried changing model.predict to:
yhat = model.predict(img, batch_size=1)
but it did not seem to have any effect on the problem.
I tried using:
from tensorflow.keras import backend as K
K.clear_session()
but that did not help.
I ran
pip3 uninstall tensorflow-gpu
and then installed the regular tensorflow with
pip3 install tensorflow
but that did not help either.
For reference, I got the exact same error with all of these attempts so far.
As suggested, I tried:
img_resized = tf.image.resize(img, (height, width))
but I now get the following error:
Traceback (most recent call last):
  File "organizeSpreadsheet.py", line 105, in <module>
    main()
  File "organizeSpreadsheet.py", line 86, in main
    objects_from_image = labelObjectFromImage(path_to_images, directory_filename)
  File "organizeSpreadsheet.py", line 53, in labelObjectFromImage
    img = img_resized.reshape((1, height, width, channels))
  File "/home/jr/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 437, in __getattr__
    raise AttributeError("""
AttributeError:
    'EagerTensor' object has no attribute 'reshape'.
    If you are looking for numpy-related methods, please run the following:
    from tensorflow.python.ops.numpy_ops import np_config
    np_config.enable_numpy_behavior()
I know I did not do exactly what was suggested, but it threw an error, so I followed that advice.
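For what it's worth, that AttributeError only says that tf.image.resize returns an EagerTensor, which has no NumPy-style .reshape method; either tf.reshape or converting with .numpy() first works, without enabling global NumPy behavior. A minimal sketch, using a dummy array in place of the real image file:

```python
import numpy as np
import tensorflow as tf

# Dummy image standing in for the array from cv2.imread
img = np.zeros((100, 80, 3), dtype=np.uint8)

# tf.image.resize returns a float32 EagerTensor of shape (50, 40, 3)
resized = tf.image.resize(img, (50, 40))

# Option 1: reshape on the TensorFlow side
batched = tf.reshape(resized, (1, 50, 40, 3))

# Option 2: convert back to NumPy first, then reshape as usual
batched_np = resized.numpy().reshape((1, 50, 40, 3))

print(tuple(batched.shape), batched_np.shape)
```

Either option produces the (1, height, width, channels) batch layout the rest of the code expects.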
I corrected that error by making the following changes:
def labelObjectFromImage(image_path, directory_filename):
    scale = 60
    img = cv2.imread(image_path + directory_filename)
    height = int(img.shape[0] * scale / 100)
    width = int(img.shape[1] * scale / 100)
    channels = img.shape[2]
    #img_resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
    #img_resized = tf.image.resize(img, (height, width))
    tf.image.resize(img, (height, width))
    #img = load_img(image_path+directory_filename, target_size=(height, width))
    model = VGG16(weights="imagenet", include_top=False, input_shape=(height, width, channels))
    #img = img_to_array(img)
    #img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
    img = img.reshape((1, height, width, channels))
    img = preprocess_input(img)
    yhat = model.predict(img, batch_size=1)
    label = decode_predictions(yhat)
    label = label[0][0]
    print(label)
But now I get the error:
ValueError: cannot reshape array of size 63483840 into shape (1,3384,2251,3)
I think this could be solved by trying multiple scales, right?
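The number in that ValueError is telling: the array being reshaped still has the original, un-resized pixel count, because the result of tf.image.resize was never assigned back to img. Assuming the original image is 5640 × 3752 × 3 (an inference from the error message and scale = 60, not stated in the thread), the arithmetic checks out:

```python
# Hypothetical original dimensions, inferred from the ValueError and scale = 60
orig_h, orig_w, channels = 5640, 3752, 3
scale = 60

# The array being reshaped still holds every original pixel...
assert orig_h * orig_w * channels == 63483840

# ...while the target shape uses the scaled-down dimensions
assert int(orig_h * scale / 100) == 3384
assert int(orig_w * scale / 100) == 2251
print("Target shape holds", 1 * 3384 * 2251 * 3, "elements, not 63483840")
```

So trying other scales would not help; the fix is to keep the resized tensor and reshape that instead.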
You are already using batch_size=1.
Try resizing the image before making the prediction, and keep the result:
img = tf.image.resize(image, [small_height, small_width])
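Putting the answer's suggestion together with the EagerTensor fix, the resize-then-reshape part of the function could look like the sketch below, using a random array in place of cv2.imread and stopping short of model.predict so it runs without downloading weights. One caveat the thread does not raise: decode_predictions expects the 1000-class output of the include_top=True model, whose fixed input size is 224 × 224, so 224 is used here rather than a percentage scale.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import preprocess_input

# Random stand-in for cv2.imread(image_path + directory_filename)
img = np.random.randint(0, 256, size=(5640, 3752, 3), dtype=np.uint8)

# Assign the resize result back to a variable (the original code discarded it);
# 224x224 is VGG16's native input size when include_top=True
resized = tf.image.resize(img, (224, 224)).numpy()

# Now the element counts match and reshape succeeds
batch = resized.reshape((1, 224, 224, 3))
batch = preprocess_input(batch)

# batch is ready for model.predict(batch, batch_size=1)
print(batch.shape)
```

Resizing before building the batch is also what eliminates the original "exceeds 10% of free system memory" warning, since the network no longer has to allocate activations for a 5640 × 3752 input.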