
How can I fix the network input shape?

There are 1275 images, each with dimensions (128, 19, 1). The images are divided into groups of five, giving 255 (1275/5) samples of 5 images each, so the final shape of the data is (255, 5, 128, 19, 1). This data is fed to the ConvLSTM2D network whose code is below. Training completes without any problem, but the following error appears as soon as evaluation starts. Thanks to anyone who can help me fix it.
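For reference, the grouping described above can be sketched with NumPy, assuming the 1275 grayscale images are already stacked into one array (the variable names here are illustrative, not from the code below):

```python
import numpy as np

# Assume 1275 single-channel images of shape (128, 19, 1), stacked along axis 0.
images = np.zeros((1275, 128, 19, 1), dtype=np.uint8)

# Group consecutive images into samples of 5 frames each:
# (1275, 128, 19, 1) -> (255, 5, 128, 19, 1)
samples = images.reshape(255, 5, 128, 19, 1)

print(samples.shape)  # (255, 5, 128, 19, 1)
```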

Error:

IndexError: list index out of range

File "", line 1, in <module>
    runfile('D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300/Afrah_convlstm2d.py', wdir='D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300')

File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

File "D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300/Afrah_convlstm2d.py", line 111, in <module>
    test_loss, test_acc = seq.evaluate(test_data)

File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\keras\engine\training.py", line 1361, in evaluate
    callbacks=callbacks)

File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\keras\engine\training_arrays.py", line 403, in test_loop
    if issparse(ins[i]) and not K.is_sparse(feed[i]):

IndexError: list index out of range

#Importing libraries
#-------------------------------------------------
from PIL import Image
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import os
from matplotlib import pyplot as plt


#Data Preprocessing
#-----------------------------------------------------------------
Data = np.zeros((255,5,128,19,1),dtype=np.uint8)

image_folder = 'D:\\thesis\\Paper 3\\Feature Extraction\\two_dimension_Feature_extraction\\stft_feature\\Training_set\\P300'
images = [img for img in os.listdir(image_folder) if img.endswith(".png")]

for image in images:
    img = Image.open(os.path.join(image_folder, image)).convert('L')
    array = np.expand_dims(np.array(img), axis=2)
    for i in range(0, len(Data)):
        for j in range(0, 5):  # five frames per sample, not four
            Data[i, j] = array

           

labels = np.zeros((2,len(Data)), dtype=np.uint8)
labels = np.transpose(labels)
for i in range(0, len(Data) ):
    if i <= 127:
        labels[i][0] = 1
    elif i > 127 :
        labels[i][1] = 1            
            
#Network Configuration
#--------------------------------------------------------------------------------------------------------------------------
seq = Sequential()
seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(5, 128, 19, 1),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(Flatten())
seq.add(Dense(units=128, activation='relu'))  # 'units' replaces the deprecated 'output_dim'
seq.add(Dense(units=2, activation='relu'))
seq.compile(loss='binary_crossentropy', optimizer='adadelta', metrics=['acc'])

#Fit the Data on Model
#--------------------------------------------------------------------------------------
train_data_1 = Data[0:84]
train_data_2 = Data[127:212]
train_data = np.concatenate([train_data_1, train_data_2])
label_train_1 = labels[0:84]
label_train_2 = labels[127:212]
label_train = np.concatenate([label_train_1, label_train_2])

val_data_1 = Data[84:104]
val_data_2 = Data[212:232]
val_data = np.concatenate([val_data_1, val_data_2])
label_val_1 = labels[84:104]
label_val_2 = labels[212:232]
label_val = np.concatenate([label_val_1, label_val_2])


test_data_1 = Data[104:127]
test_data_2 = Data[232:]
test_data = np.concatenate([test_data_1, test_data_2])
label_test_1 = labels[104:127]
label_test_2 = labels[232:]
label_test = np.concatenate([label_test_1, label_test_2])


history = seq.fit(train_data, label_train, validation_data=(val_data, label_val), epochs=2, batch_size=10)

#Visualize the Result
#---------------------------------------------------------------------------------------
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.plot()
plt.legend()
plt.show()
#Evaluate Model on test Data
#----------------------------------------------------------------------------------------------
test_loss, test_acc = seq.evaluate(test_data)
print('test_acc:', test_acc)





     

The problem is at the very end: when you evaluate your model, you simply forgot to pass the y argument. This modification should fix the problem:

test_loss, test_acc = seq.evaluate(test_data, y=label_test)
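For context, the IndexError arises because Keras's test loop indexes the user-supplied data list (`ins`) in parallel with the model's feed placeholders (inputs, targets, sample weights); with y missing, `ins` is too short. A stripped-down, pure-Python mimic of that failure mode (illustrative only, not the actual Keras source):

```python
# The model expects data for three placeholders per batch:
feed = ["input_placeholder", "target_placeholder", "weight_placeholder"]
ins = ["test_data"]  # only x was passed to evaluate(); y is missing

try:
    for i in range(len(feed)):
        _ = ins[i]  # fails at i == 1
except IndexError as err:
    print(err)  # list index out of range
```

Passing `y=label_test` makes `ins` line up with the placeholders, so the loop no longer runs past the end of the list.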

