
Keras Model.predict returns the error 'Matrix size-incompatible'

I'm trying to use the model.predict function with a Keras NN model, but it returns the 'Matrix size-incompatible' error every time. My training, validation and test datasets are based on 10 samples with 31 inputs and 45 targets. I'm trying to make predictions for 4 different input arrays (31 features each). Any suggestions?

My code:

import numpy as np
import tensorflow as tf
from tensorflow import keras

npz=np.load('coal_data_mass_train.npz')
train_inputs = npz['inputs'].astype(np.float64)   # np.float was removed in NumPy 1.24
train_targets = npz['targets'].astype(np.int64)   # likewise np.int

npz = np.load('coal_data_mass_validation.npz')
validation_inputs, validation_targets = npz['inputs'].astype(np.float64), npz['targets'].astype(np.int64)

npz = np.load('coal_data_mass_test.npz')
test_inputs, test_targets = npz['inputs'].astype(np.float64), npz['targets'].astype(np.int64)

input_size = 31
output_size = 45
hidden_layer_size = 3
model = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(output_size, activation='linear') 
])

model.compile(optimizer='sgd', loss='mean_absolute_error', metrics=['MeanAbsoluteError','mse'])

batch_size = 10
max_epochs = 20
callback=tf.keras.callbacks.EarlyStopping(patience=2)
model.fit(train_inputs, 
          train_targets, 
          batch_size=10, 
          epochs=max_epochs, 
          verbose=2,
          callbacks=[callback],
          validation_data=(validation_inputs, validation_targets)
          )
test_loss= model.evaluate(test_inputs, test_targets)
Output: 2/2 [==============================] - 0s 0s/sample - loss: 0.3248 - mean_absolute_error: 0.3248 - mean_squared_error: 0.3372

input_data=np.loadtxt('inputs_data.csv',delimiter=',')
first_x=input_data[0,:]
second_x=input_data[1,:]
third_x=input_data[2,:]   # note: [2:] (no comma) would select rows 2 onward, not row 2
forth_x=input_data[3,:]

first_y=model.predict(first_x, batch_size=1)
print(first_y.shape)

Output:
    ---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-22-3fc986b65e7b> in <module>
----> 1 first_y=model.predict(first_x, batch_size=1)
      2 print(first_y.shape)

~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1076           verbose=verbose,
   1077           steps=steps,
-> 1078           callbacks=callbacks)
   1079 
   1080   def reset_metrics(self):

~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    361 
    362         # Get outputs.
--> 363         batch_outs = f(ins_batch)
    364         if not isinstance(batch_outs, list):
    365           batch_outs = [batch_outs]

~\anaconda3\lib\site-packages\tensorflow\python\keras\backend.py in __call__(self, inputs)
   3290 
   3291     fetched = self._callable_fn(*array_vals,
-> 3292                                 run_metadata=self.run_metadata)
   3293     self._call_fetch_callbacks(fetched[-len(self._fetches):])
   3294     output_structure = nest.pack_sequence_as(

~\anaconda3\lib\site-packages\tensorflow\python\client\session.py in __call__(self, *args, **kwargs)
   1456         ret = tf_session.TF_SessionRunCallable(self._session._session,
   1457                                                self._handle, args,
-> 1458                                                run_metadata_ptr)
   1459         if run_metadata:
   1460           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

InvalidArgumentError: Matrix size-incompatible: In[0]: [1,1], In[1]: [31,3]
     [[{{node sequential_4/dense_12/Relu}}]]

train_inputs shape: 31 features and 6 samples; train_targets shape: 45 features and 6 samples; first_x shape: 31 features (1 row)
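The `In[0]: [1,1]` in the traceback explains the failure: `first_x` has shape `(31,)`, so Keras interprets it as 31 samples of one feature each, and with `batch_size=1` each batch is a `[1,1]` matrix that cannot be multiplied by the first Dense layer's `[31,3]` kernel. A minimal sketch of the shape problem in plain NumPy (no Keras needed; `W` stands in for the layer's weight matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((31, 3))    # first Dense layer's kernel: (input_size, hidden)

first_x = rng.standard_normal(31)   # one CSV row, shape (31,) -- 1-D

# What predict(batch_size=1) effectively sees: a (1, 1) slice, incompatible with (31, 3).
bad_batch = first_x.reshape(-1, 1)[:1]
print(bad_batch.shape)              # (1, 1)

# The fix: an explicit batch of one sample with 31 features.
good_batch = first_x.reshape(1, -1)
print(good_batch.shape)             # (1, 31)
print((good_batch @ W).shape)       # (1, 3) -- the matmul now succeeds
```

The same `reshape(1, -1)` applied to `first_x` before calling `model.predict` avoids the error.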

The code for dataset preprocessing:

import numpy as np
from sklearn import preprocessing
data=np.loadtxt('dataset_mass.csv',delimiter=',')
unscaled_inputs=data[:,0:31]
unscaled_targets=data[:,38:83]
scaled_inputs = preprocessing.scale(unscaled_inputs)
scaled_targets=preprocessing.scale(unscaled_targets)
samples_count=scaled_inputs.shape[0]
train_samples_count=int(0.6*samples_count)
validation_samples_count=int(0.2*samples_count)
test_samples_count=samples_count-train_samples_count-validation_samples_count

train_inputs=scaled_inputs[:train_samples_count]
train_targets=scaled_targets[:train_samples_count]

validation_inputs=scaled_inputs[train_samples_count:train_samples_count+validation_samples_count]
validation_targets=scaled_targets[train_samples_count:train_samples_count+validation_samples_count]

test_inputs=scaled_inputs[train_samples_count+validation_samples_count:]
test_targets=scaled_targets[train_samples_count+validation_samples_count:]

print(np.sum(train_targets), train_samples_count, np.sum(train_targets) / train_samples_count)
print(np.sum(validation_targets), validation_samples_count, np.sum(validation_targets) / validation_samples_count)
print(np.sum(test_targets), test_samples_count, np.sum(test_targets) / test_samples_count)

OUT:
13.960026602768515 6 2.3266711004614193
4.3277928536591395 2 2.1638964268295697
-18.287819456427652 2 -9.143909728213826

np.savez('coal_data_mass_train', inputs=train_inputs, targets=train_targets)
np.savez('coal_data_mass_validation', inputs=validation_inputs, targets=validation_targets)
np.savez('coal_data_mass_test', inputs=test_inputs, targets=test_targets)
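A quick sanity check after saving catches shape mismatches before training. This sketch uses synthetic stand-in arrays and a temporary file (the real `coal_data_mass_train.npz` isn't available here), with the shapes from the question: 6 training samples, 31 inputs, 45 targets:

```python
import os
import tempfile

import numpy as np

# Synthetic stand-ins shaped like the real training split.
inputs = np.random.rand(6, 31)
targets = np.random.rand(6, 45)

path = os.path.join(tempfile.mkdtemp(), 'coal_data_mass_train.npz')
np.savez(path, inputs=inputs, targets=targets)

npz = np.load(path)
print(npz['inputs'].shape, npz['targets'].shape)           # (6, 31) (6, 45)
assert npz['inputs'].shape[0] == npz['targets'].shape[0]   # same sample count
```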

The dataset_mass.csv file from which I import my data contains 83 columns (some of which I do not use) and 10 rows.
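Those 10 rows explain the printed sample counts: the 60/20/20 integer split in the preprocessing code yields 6 training, 2 validation and 2 test samples, matching the 6, 2, 2 in the OUT line above:

```python
samples_count = 10                                    # rows in dataset_mass.csv
train_samples_count = int(0.6 * samples_count)        # 6
validation_samples_count = int(0.2 * samples_count)   # 2
test_samples_count = samples_count - train_samples_count - validation_samples_count  # 2
print(train_samples_count, validation_samples_count, test_samples_count)  # 6 2 2
```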

Try this out while defining the model:

model = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_layer_size, input_shape=(31,), activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(output_size, activation='linear') 
])
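Declaring `input_shape` makes the expected feature count explicit, but the array passed to `predict` still has to be 2-D, shaped `(batch, features)`; a bare `(31,)` row is read as 31 single-feature samples. A minimal end-to-end sketch (random data standing in for the original CSV row, since those files aren't available):

```python
import numpy as np
import tensorflow as tf

# Same architecture as in the question, with the input shape declared.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, input_shape=(31,), activation='relu'),
    tf.keras.layers.Dense(3, activation='relu'),
    tf.keras.layers.Dense(45, activation='linear'),
])

first_x = np.random.rand(31).astype(np.float32)   # stand-in for one CSV row, shape (31,)
first_y = model.predict(first_x.reshape(1, -1), batch_size=1)  # batch of one sample
print(first_y.shape)                              # (1, 45)
```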
