
Keras MemoryError at data = data.astype("float") / 255.0: Unable to allocate 309. MiB for an array with shape (13165, 32, 32, 3)

I am currently working on the Smiles dataset, applying deep learning to classify a smile as positive or negative. The machine I am using is a Raspberry Pi 3, and the Python version running this program is 3.7 (not 2.7).

I have a total of 13165 images in the training set, and I would like to store them in a single array. However, I ran into a problem: NumPy is unable to allocate an array with shape (13165, 32, 32, 3).

The following shows the source code (shallownet_smile.py):

from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from pyimagesearch.preprocessing import ImageToArrayPreprocessor
from pyimagesearch.preprocessing import SimplePreprocessor
from pyimagesearch.datasets import SimpleDatasetLoader
from pyimagesearch.nn.conv.shallownet import ShallowNet
from keras.optimizers import SGD
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse


ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True, help="path to input dataset")
args = vars(ap.parse_args())

# grab the list of images we'll be describing
print("[INFO] loading images...")

imagePaths = list(paths.list_images(args["dataset"]))

sp = SimplePreprocessor(32, 32)
iap = ImageToArrayPreprocessor()

sdl = SimpleDatasetLoader(preprocessors=[sp, iap])
(data, labels) = sdl.load(imagePaths, verbose=1)
# convert values to between 0-1
data = data.astype("float") / 255.0

# partition our data into training and test sets
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.25,
    random_state=42)

# convert the labels from integers to vectors
trainY = LabelBinarizer().fit_transform(trainY)
testY = LabelBinarizer().fit_transform(testY)

# initialize the optimizer and model
print("[INFO] compiling model...")

# initialize stochastic gradient descent with learning rate of 0.005
opt = SGD(lr=0.005)

model = ShallowNet.build(width=32, height=32, depth=3, classes=2)
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the network
print("[INFO] training network...")

H = model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=32,
    epochs=100, verbose=1)

print("[INFO] evaluating network...")

predictions = model.predict(testX, batch_size=32)

print(classification_report(
    testY.argmax(axis=1),
    predictions.argmax(axis=1),
    target_names=["positive", "negative"]
))

plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, 100), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, 100), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, 100), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, 100), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.show()

Assume that the dataset is in my current directory. This is how I run the script and the error I obtained:

python3 shallownet_smile.py -d=datasets/Smiles

error message

I am still confused about what is wrong. I would highly appreciate it if anyone experienced in deep learning/machine learning could explain and clarify what I am doing wrong.

Thank you for your help and attention.

First of all, you have a system with very little memory, so try smaller images.
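For example, a minimal sketch (the 24x24 size is only an illustration, not part of the original answer; the model input must be changed to match):

sp = SimplePreprocessor(24, 24)   # resize to 24x24 instead of 32x32; memory use shrinks roughly quadratically

model = ShallowNet.build(width=24, height=24, depth=3, classes=2)   # model input size must match the preprocessor output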

The error originates mainly in this line: data = data.astype("float") / 255.0

The reason is that data is already a uint8 NumPy array, and astype("float") creates an additional float64 copy of it, which needs eight times as much memory on top of the original.
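As a quick sanity check, the 309 MiB figure in the error matches the size of a float64 copy of the whole dataset (a back-of-the-envelope sketch using only the numbers from the question):

n_images, h, w, c = 13165, 32, 32, 3
bytes_uint8 = n_images * h * w * c      # ~38.6 MiB as uint8 (1 byte per value)
bytes_float64 = bytes_uint8 * 8         # astype("float") -> float64 (8 bytes per value)
print(bytes_float64 / 2**20)            # ~308.6 MiB, i.e. the "309. MiB" from the error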

I am changing some parts of the SimpleDatasetLoader so that you can train.

Go to the module behind from pyimagesearch.datasets import SimpleDatasetLoader. It should be inside the folder pyimagesearch/datasets/, in the file simpledatasetloader.py (sample code: https://github.com/whydna/Deep-Learning-For-Computer-Vision/blob/master/pyimagesearch/datasets/simpledatasetloader.py).

Replace that .py file with my code below and change the value of max_image (reduce it unless you have enough memory to hold the full dataset). Also remove the line data = data.astype("float") / 255.0 from shallownet_smile.py, as I'm returning the pre-processed array from the function.

# import the necessary packages
import numpy as np
import cv2
import os

max_image = 1000  # cap on how many images are loaded, to limit memory use

class SimpleDatasetLoader:
    def __init__(self, preprocessors=None):
        # store the image preprocessors
        self.preprocessors = preprocessors

        # if the preprocessors are None, initialize them as an
        # empty list
        if self.preprocessors is None:
            self.preprocessors = []

    def load(self, imagePaths, verbose=-1):
        # initialize the list of features and labels
        data = []
        labels = []
        cnt = 0

        # loop over the input images
        for (i, imagePath) in enumerate(imagePaths):
            if cnt >= max_image:
                break
            # load the image and extract the class label assuming
            # that our path has the following format:
            # /path/to/dataset/{class}/{image}.jpg
            image = cv2.imread(imagePath)
            label = imagePath.split(os.path.sep)[-2]

            # check to see if our preprocessors are not None
            if self.preprocessors is not None:
                # loop over the preprocessors and apply each to
                # the image
                for p in self.preprocessors:
                    image = p.preprocess(image)

            # treat our processed image as a "feature vector"
            # by updating the data list followed by the labels
            data.append(image)
            labels.append(label)

            # show an update every `verbose` images
            cnt += 1
            if verbose > 0 and i > 0 and (i + 1) % verbose == 0:
                print("[INFO] processed {}/{}".format(i + 1,
                    len(imagePaths)))

        # return a tuple of the data and labels
        return (np.array(data, dtype='float32')/255., np.array(labels))
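With the modified loader, the corresponding part of shallownet_smile.py becomes (a sketch; only the scaling line is removed, since the loader now returns a float32 array already scaled to [0, 1]):

sdl = SimpleDatasetLoader(preprocessors=[sp, iap])
(data, labels) = sdl.load(imagePaths, verbose=1)
# data = data.astype("float") / 255.0   # remove this line; scaling is done inside the loader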

If you still have memory issues, reduce the batch_size here:

H = model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=4,
    epochs=100, verbose=1)
