
Making a custom generator in keras for prediction

I am working on doing prediction for my large database of ~1 million images. For each image, I have code that can chop the image up into ~200 smaller images and pass them into keras as a numpy array for prediction.
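(The chopping step itself looks roughly like the sketch below; chop_image, the tile size, and the non-overlapping layout are placeholders standing in for my real code, not the actual implementation.)

import numpy as np

def chop_image(img, tile_size=128):
    """Split an H x W x C image into non-overlapping tile_size x tile_size tiles.
    Tile size and layout here are placeholders for the real chopping code."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(img[y:y + tile_size, x:x + tile_size])
    return np.stack(tiles)  # shape: (num_tiles, tile_size, tile_size, C)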

I want to avoid unnecessary reading and writing to the hard drive, so I don't want to save all these smaller images and use flow_from_directory. Instead, I am looking to read in an image, chop it up with my existing code, pass the smaller images into my network as a batch entirely in memory, and then repeat this process for many images.

Is this something Keras can handle? If so, I suspect I will need to make my own custom generator, but I'm not sure how to do this, and I couldn't find any good examples. Does anyone have an example of how to implement a custom generator?

Try something like this:

import os
import cv2
import numpy as np

dpath = 'path to test folder'
ids = os.listdir(dpath + "test/")   # listdir returns the filenames with their extension
first = 1                           # flag marking the first batch of predictions
for id in ids:
    x_batch = []
    img = cv2.imread(dpath + 'test/{}'.format(id))
    img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)  # if resizing is needed

    chopped_images = your code that chops the image into smaller pieces
    for piece in chopped_images:
        x_batch.append(piece)

    x_batch = np.array(x_batch, np.float32)
    preds = model.predict_on_batch(x_batch)
    if first == 1:
        predsA = preds.copy()                       # first image: start the results array
        first = 0
    else:
        predsA = np.append(predsA, preds, axis=0)   # stack predictions for later images
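
If you prefer a true generator over the explicit loop above, a minimal sketch along these lines should also work; here chop_image stands in for your existing chopping code, and this assumes the older Keras predict_generator API (with recent tf.keras you can pass the generator straight to model.predict instead):

import os
import cv2
import numpy as np

def tile_generator(dpath, ids):
    """Yield one in-memory batch of sub-images per source image."""
    for id in ids:
        img = cv2.imread(os.path.join(dpath, 'test', id))
        tiles = chop_image(img)  # placeholder for your existing chopping code
        yield np.asarray(tiles, dtype=np.float32)

ids = os.listdir(os.path.join(dpath, 'test'))
preds = model.predict_generator(tile_generator(dpath, ids), steps=len(ids))

Each yielded batch is one source image's ~200 tiles, so nothing is written to disk and only one image's tiles are held in memory at a time.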
