How to change batch size in VGG16?

How do I change the batch size in VGG16? I am trying to do this to solve the problem of exceeding the 10% free-memory constraint.

The error:
2021-12-03 16:17:07.263665: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 4888553472 exceeds 10% of free system memory.
Here is my code:
def labelObjectFromImage(image_path, directory_filename):
    img = cv2.imread(image_path+directory_filename)
    height = img.shape[0]
    width = img.shape[1]
    channels = img.shape[2]
    img = load_img(image_path+directory_filename, target_size=(height, width))
    model = VGG16(weights="imagenet", include_top = False, input_shape = (height, width, channels))
    img = img_to_array(img)
    img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
    img = preprocess_input(img)
    yhat = model.predict(img)
    label = decode_predictions(yhat)
    label = label[0][0]
    print(label)
I tried changing model.predict to:
yhat = model.predict(img, batch_size=1)
but it doesn't seem to have any effect on the problem.

I also tried using:
from tensorflow.keras import backend as K
K.clear_session()
but that didn't help.

I ran
pip3 uninstall tensorflow-gpu
and then installed the regular tensorflow with
pip3 install tensorflow
but that didn't help either.

FYI, I have gotten the same error with all of these attempts so far.

As suggested, I tried:
img_resized = tf.image.resize(img, (height, width))
but now I get the following error:
Traceback (most recent call last):
  File "organizeSpreadsheet.py", line 105, in <module>
    main()
  File "organizeSpreadsheet.py", line 86, in main
    objects_from_image = labelObjectFromImage(path_to_images, directory_filename)
  File "organizeSpreadsheet.py", line 53, in labelObjectFromImage
    img = img_resized.reshape((1, height, width, channels))
  File "/home/jr/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 437, in __getattr__
    raise AttributeError("""
AttributeError:
    'EagerTensor' object has no attribute 'reshape'.
    If you are looking for numpy-related methods, please run the following:
    from tensorflow.python.ops.numpy_ops import np_config
    np_config.enable_numpy_behavior()
I know I didn't do exactly what was suggested, but it threw an error, so I followed that advice.

I corrected the error by making the following changes:
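For reference, that AttributeError occurs because tf.image.resize returns an EagerTensor, and tensors do not have NumPy's .reshape method. A minimal sketch of the two usual workarounds, tf.reshape or converting back to NumPy first (the tiny dummy array here is a hypothetical stand-in for cv2.imread's output):

```python
import numpy as np
import tensorflow as tf

# A dummy array standing in for cv2.imread's output (hypothetical sizes).
img = np.zeros((4, 6, 3), dtype=np.uint8)

# tf.image.resize returns an EagerTensor, which has no .reshape method.
resized = tf.image.resize(img, (2, 3))

# Option 1: reshape the tensor with tf.reshape.
batched = tf.reshape(resized, (1, 2, 3, 3))

# Option 2: convert back to a NumPy array first, then reshape that.
batched_np = resized.numpy().reshape((1, 2, 3, 3))

print(batched.shape, batched_np.shape)  # (1, 2, 3, 3) (1, 2, 3, 3)
```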
def labelObjectFromImage(image_path, directory_filename):
    scale = 60
    img = cv2.imread(image_path+directory_filename)
    height = int(img.shape[0] * scale / 100)
    width = int(img.shape[1] * scale / 100)
    channels = img.shape[2]
    #img_resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
    #img_resized = tf.image.resize(img, (height, width))
    tf.image.resize(img, (height, width))
    #img = load_img(image_path+directory_filename, target_size=(height, width))
    model = VGG16(weights="imagenet", include_top = False, input_shape = (height, width, channels))
    #img = img_to_array(img)
    #img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
    img = img.reshape((1, height, width, channels))
    img = preprocess_input(img)
    yhat = model.predict(img, batch_size=1)
    label = decode_predictions(yhat)
    label = label[0][0]
    print(label)
but now I get the error:
ValueError: cannot reshape array of size 63483840 into shape (1,3384,2251,3)
I assume this could be solved by trying multiple scales, right?
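The numbers in that ValueError line up with the resize result being discarded: tf.image.resize returns a new tensor rather than modifying img in place, so img still holds every pixel of the original image. A small NumPy sketch of the mismatch (the 5640x3752 original dimensions are inferred from the error message, since 5640 * 3752 * 3 = 63,483,840 and 60% of 5640 x 3752 is 3384 x 2251):

```python
import numpy as np

# Dimensions inferred from the error message (hypothetical but consistent).
orig_h, orig_w, channels = 5640, 3752, 3
scale = 60
height = int(orig_h * scale / 100)   # 3384
width = int(orig_w * scale / 100)    # 2251

img = np.zeros((orig_h, orig_w, channels), dtype=np.uint8)

# The bug: the tf.image.resize result was thrown away, so img still has
# the original element count and cannot fit the smaller batched shape.
assert img.size == 63483840
assert img.size != 1 * height * width * channels  # hence the ValueError

# The fix is to keep the resized tensor and reshape THAT instead:
#   img = tf.image.resize(img, (height, width))
#   img = tf.reshape(img, (1, height, width, channels))
```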
You are already using batch_size=1.

Try resizing the image before making the prediction:

tf.image.resize(image, [small_height, small_width])

(Note that tf.image.resize takes only the new [height, width]; the channel count is preserved automatically.)
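Putting that together, a sketch of the question's pipeline with the resize actually kept (a random array stands in for cv2.imread's output; 224x224 is the input size the stock VGG16 classifier expects, and weights=None is used here only to skip the large weight download — use weights="imagenet" with include_top=True to get real ImageNet labels):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Random pixels standing in for cv2.imread(image_path + directory_filename).
img = np.random.randint(0, 256, size=(900, 1200, 3), dtype=np.uint8)

# Resize down to the network's expected input BEFORE predicting,
# and keep the result instead of discarding it.
img = tf.image.resize(img, (224, 224))   # returns a float EagerTensor
img = tf.reshape(img, (1, 224, 224, 3))  # add the batch dimension
img = preprocess_input(img)

# weights=None keeps this sketch light; with weights="imagenet" the
# decode_predictions call below returns meaningful labels.
model = VGG16(weights=None, include_top=True, input_shape=(224, 224, 3))
yhat = model.predict(img, batch_size=1)
print(yhat.shape)  # (1, 1000)

# label = decode_predictions(yhat)[0][0]  # meaningful only with weights="imagenet"
```

Resizing to 224x224 shrinks the activation maps by orders of magnitude compared with a 3384x2251 input, which is what makes the "exceeds 10% of free system memory" warning go away.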