
How to handle image size variation in Deep Learning?

I am working on an image classification model that classifies images into 5 categories. I have 5000 training images stored in a folder, but they all have different heights and widths, like this:

'631.jpg': {'width': 81, 'height': 25},
'8595.jpg': {'width': 1173, 'height': 769},
'284.jpg': {'width': 94, 'height': 75},
'5999.jpg': {'width': 4220, 'height': 1951}

Can anyone suggest a technique for handling this kind of data?

tf.image.resize_with_crop_or_pad(image, desired_height, desired_width)

Images smaller than desired_height × desired_width are padded with zeros, and larger ones are centrally cropped; each dimension is handled independently.

import tensorflow as tf
import matplotlib.pyplot as plt

# Take one 32x32x3 CIFAR-10 test image as a demo input.
_, ((first, *rest), _) = tf.keras.datasets.cifar10.load_data()

# Add a batch dimension, scale to [0, 1], then pad the 32x32 image to 48x48.
modified = tf.image.resize_with_crop_or_pad(first[None, ...]/255, 48, 48)

# Drop the batch dimension before plotting.
plt.imshow(tf.squeeze(modified))
plt.show()

[Output: the 32×32 image centered on a zero-padded 48×48 canvas]
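To apply the same idea to your 5000 training images on disk, you can standardize every image inside a tf.data input pipeline. The following is a minimal sketch, not code from your project: the folder name train_images, the 224×224 target size, and the batch size are assumptions to adapt to your setup.

import tensorflow as tf

# Assumed values: adjust the folder name and sizes to your data.
IMG_DIR = "train_images"          # hypothetical folder holding the 5000 JPEGs
TARGET_H, TARGET_W = 224, 224     # fixed size expected by the model
BATCH_SIZE = 32

def load_and_standardize(path):
    # Read and decode a JPEG of arbitrary height/width.
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    # Convert uint8 [0, 255] to float32 [0, 1].
    image = tf.image.convert_image_dtype(image, tf.float32)
    # Pad images smaller than the target and centrally crop larger ones.
    return tf.image.resize_with_crop_or_pad(image, TARGET_H, TARGET_W)

dataset = (
    tf.data.Dataset.list_files(IMG_DIR + "/*.jpg")
    .map(load_and_standardize, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(BATCH_SIZE)
)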

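Note that resize_with_crop_or_pad never rescales pixels, so a very large image such as 5999.jpg (4220×1951) would lose most of its content to the central crop. A common alternative, sketched below under the same assumed target size, is tf.image.resize, which interpolates the whole image down to the target size at the cost of distorting the aspect ratio.

import tensorflow as tf

def load_and_rescale(path, target_h=224, target_w=224):
    # Hypothetical variant of load_and_standardize above.
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    # Interpolate to the target size; all content is kept but may be stretched.
    return tf.image.resize(image, [target_h, target_w])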