
How to convert a set of images into a numpy array?

I'm building my first neural network, following the examples in the book "Deep Learning with Python" by François Chollet, and I immediately hit my first issue. When the author imported the MNIST dataset, he also printed its shape, showing that it is a 1D tensor. I'm trying to import a folder containing images of caddies to check whether the NN classifies them correctly or not. The problem is that I can't obtain a single tensor from all these images. I've tried converting each of them with numpy.asarray(my_image), and I also tried converting the whole list, but it turned out to become a tuple... any hint?

from os import listdir

import numpy as np
from matplotlib import image

train_label = 'Caddies'
train_images = list()

for filename in listdir('/content/drive/MyDrive/Data set/Caddies/'):
    img_data = image.imread('/content/drive/MyDrive/Data set/Caddies/' +
                            filename)
    img_data = np.asarray(img_data)
    #print(str(img_data.dtype) + str(img_data.shape))
    train_images.append(img_data)

print('> loaded %d images' % len(train_images))

train_images = np.array(train_images)

print(train_images.shape)   # .shape is an attribute, not a method
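The likely root cause is that the photos have different dimensions, so `np.array()` on the list cannot stack them into one tensor (it falls back to an object array). A minimal sketch of one fix, assuming Pillow is available: resize every image to a common size first, then stack with `np.stack`. The folder and file names below are synthetic stand-ins for the Caddies directory, created only so the example runs end to end:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Create a few dummy JPEGs of different sizes to stand in for the
# real image folder (replace `folder` with your own path).
folder = tempfile.mkdtemp()
for i, size in enumerate([(640, 480), (300, 200), (224, 224)]):
    Image.new('RGB', size, color=(i * 40, 0, 0)).save(
        os.path.join(folder, f'caddy_{i}.jpg'))

target_size = (224, 224)  # every image must share one shape before stacking
train_images = []
for filename in sorted(os.listdir(folder)):
    img = Image.open(os.path.join(folder, filename)).resize(target_size)
    train_images.append(np.asarray(img))

# np.stack only succeeds because all images now have the same shape
train_images = np.stack(train_images)
print(train_images.shape)  # (3, 224, 224, 3)
```

After this, `train_images` is a single 4D tensor of shape `(num_images, height, width, channels)`, which is what Keras layers expect.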

If you want to feed your images to a neural network, I suggest that you don't load them all into memory with a for-loop. Instead, you can build a tf.data pipeline:

import tensorflow as tf
import os
os.chdir('pictures')

files = tf.data.Dataset.list_files('*jpg')

def load_images(path):
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image)
    image = tf.image.convert_image_dtype(image, tf.float32) # optional
    image = tf.image.resize(image, (224, 224))              # optional
    return image 

ds = files.map(load_images).batch(1)

next(iter(ds)).shape
# TensorShape([1, 224, 224, 3])
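If you then need a plain NumPy array out of the dataset (for instance to inspect its shape the way the book does with MNIST), `as_numpy_iterator()` yields each batch as a NumPy array and `np.concatenate` merges them. A small self-contained sketch, using random tensors in place of the decoded JPEGs so it runs without any files on disk:

```python
import numpy as np
import tensorflow as tf

# Three fake 224x224 RGB images standing in for the decoded files
images = tf.random.uniform((3, 224, 224, 3))
ds = tf.data.Dataset.from_tensor_slices(images).batch(1)

# as_numpy_iterator() yields NumPy batches; concatenate merges them
arr = np.concatenate(list(ds.as_numpy_iterator()))
print(arr.shape)  # (3, 224, 224, 3)
```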
