ValueError: Layer model expects 21 input(s), but it received 1 input tensors
I am new to Keras. I am following this example, training a binary classification model with Keras where the input is structured data read from a CSV, but I am getting the following error
ValueError: Layer model expects 21 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=float64>]
at the line
score = model.evaluate(x=test_labels, y=test_data, verbose=1)
My code looks like this:
import os
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental.preprocessing import Normalization
def dataframe_to_dataset(dataframe):
    dataframe = dataframe.copy()
    labels = dataframe.pop("label")
    labels = np.asarray(labels).astype('float32').reshape((-1, 1))
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    ds = ds.shuffle(buffer_size=len(dataframe))
    return ds
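For reference, a minimal self-contained sketch of what this helper produces, using a toy two-row frame with a single hypothetical feature column standing in for the real CSV: `from_tensor_slices` on a `(dict, labels)` pair yields one `({feature_name: scalar}, label)` element per row.

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Toy frame standing in for the real CSV; "x_mean" is a stand-in
# for the 21 feature columns in the question.
df = pd.DataFrame({"label": [0.0, 1.0], "x_mean": [0.1, 0.2]})
labels = df.pop("label").to_numpy().astype("float32").reshape((-1, 1))
ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))

for features, label in ds.take(1):
    # Each element is a ({feature_name: scalar_tensor}, label) pair.
    print(features["x_mean"].numpy(), label.numpy())
```

Keras `fit`/`evaluate` expect such a dataset to be batched (e.g. `ds.batch(32)`) before being passed in.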
os.chdir('datasets/accelerometer/accel-labeled/')
df = pd.read_csv("combined-all-labeled.csv", delimiter=',')
print(f"All size : {df.shape[0]}")
np.random.seed(23)
perm = np.random.permutation(df.index)
m = len(df.index)
train_end = int(.70 * m)
validate_end = int(.25 * m) + train_end
train_ds = dataframe_to_dataset(df.iloc[perm[:train_end]])
validate_ds = dataframe_to_dataset(df.iloc[perm[train_end:validate_end]])
test_df = df.iloc[perm[validate_end:]]
test_labels = test_df['label'].astype('float')
test_data = test_df.iloc[:, 2:22].astype('float')
print(test_data.head(2))
print(f"Train Set size : {len(train_ds)}")
print(f"Validation Set size : {len(validate_ds)}")
print(f"Test Set size : {len(test_df)}")
def encode_numerical_feature(feature, name, dataset):
    # Create a Normalization layer for our feature
    normalizer = Normalization()
    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))
    # Learn the statistics of the data
    normalizer.adapt(feature_ds)
    # Normalize the input feature
    encoded_feature = normalizer(feature)
    return encoded_feature
# Numerical features
x_mean = keras.Input(shape=(1,), name="x_mean")
x_median = keras.Input(shape=(1,), name="x_median")
x_std_dev = keras.Input(shape=(1,), name="x_stdev")
x_raw_min = keras.Input(shape=(1,), name="x_raw_min")
x_raw_max = keras.Input(shape=(1,), name="x_raw_max")
x_abs_min = keras.Input(shape=(1,), name="x_abs_min")
x_abs_max = keras.Input(shape=(1,), name="x_abs_max")
y_mean = keras.Input(shape=(1,), name="y_mean")
y_median = keras.Input(shape=(1,), name="y_median")
y_std_dev = keras.Input(shape=(1,), name="y_stdev")
y_raw_min = keras.Input(shape=(1,), name="y_raw_min")
y_raw_max = keras.Input(shape=(1,), name="y_raw_max")
y_abs_min = keras.Input(shape=(1,), name="y_abs_min")
y_abs_max = keras.Input(shape=(1,), name="y_abs_max")
z_mean = keras.Input(shape=(1,), name="z_mean")
z_median = keras.Input(shape=(1,), name="z_median")
z_std_dev = keras.Input(shape=(1,), name="z_stdev")
z_raw_min = keras.Input(shape=(1,), name="z_raw_min")
z_raw_max = keras.Input(shape=(1,), name="z_raw_max")
z_abs_min = keras.Input(shape=(1,), name="z_abs_min")
z_abs_max = keras.Input(shape=(1,), name="z_abs_max")
all_inputs = [
    x_mean,
    x_median,
    x_std_dev,
    x_raw_min,
    x_raw_max,
    x_abs_min,
    x_abs_max,
    y_mean,
    y_median,
    y_std_dev,
    y_raw_min,
    y_raw_max,
    y_abs_min,
    y_abs_max,
    z_mean,
    z_median,
    z_std_dev,
    z_raw_min,
    z_raw_max,
    z_abs_min,
    z_abs_max,
]
# Numerical features
x_mean_encoded = encode_numerical_feature(x_mean, "x_mean", train_ds)
x_median_encoded = encode_numerical_feature(x_median, "x_median", train_ds)
x_std_dev_encoded = encode_numerical_feature(x_std_dev, "x_stdev", train_ds)
x_raw_min_encoded = encode_numerical_feature(x_raw_min, "x_raw_min", train_ds)
x_raw_max_encoded = encode_numerical_feature(x_raw_max, "x_raw_max", train_ds)
x_abs_min_encoded = encode_numerical_feature(x_abs_min, "x_abs_min", train_ds)
x_abs_max_encoded = encode_numerical_feature(x_abs_max, "x_abs_max", train_ds)
y_mean_encoded = encode_numerical_feature(y_mean, "y_mean", train_ds)
y_median_encoded = encode_numerical_feature(y_median, "y_median", train_ds)
y_std_dev_encoded = encode_numerical_feature(y_std_dev, "y_stdev", train_ds)
y_raw_min_encoded = encode_numerical_feature(y_raw_min, "y_raw_min", train_ds)
y_raw_max_encoded = encode_numerical_feature(y_raw_max, "y_raw_max", train_ds)
y_abs_min_encoded = encode_numerical_feature(y_abs_min, "y_abs_min", train_ds)
y_abs_max_encoded = encode_numerical_feature(y_abs_max, "y_abs_max", train_ds)
z_mean_encoded = encode_numerical_feature(z_mean, "z_mean", train_ds)
z_median_encoded = encode_numerical_feature(z_median, "z_median", train_ds)
z_std_dev_encoded = encode_numerical_feature(z_std_dev, "z_stdev", train_ds)
z_raw_min_encoded = encode_numerical_feature(z_raw_min, "z_raw_min", train_ds)
z_raw_max_encoded = encode_numerical_feature(z_raw_max, "z_raw_max", train_ds)
z_abs_min_encoded = encode_numerical_feature(z_abs_min, "z_abs_min", train_ds)
z_abs_max_encoded = encode_numerical_feature(z_abs_max, "z_abs_max", train_ds)
all_features = layers.concatenate(
    [
        x_mean_encoded,
        x_median_encoded,
        x_std_dev_encoded,
        x_raw_min_encoded,
        x_raw_max_encoded,
        x_abs_min_encoded,
        x_abs_max_encoded,
        y_mean_encoded,
        y_median_encoded,
        y_std_dev_encoded,
        y_raw_min_encoded,
        y_raw_max_encoded,
        y_abs_min_encoded,
        y_abs_max_encoded,
        z_mean_encoded,
        z_median_encoded,
        z_std_dev_encoded,
        z_raw_min_encoded,
        z_raw_max_encoded,
        z_abs_min_encoded,
        z_abs_max_encoded,
    ]
)
x = layers.Dense(32, activation="relu")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
# `rankdir='LR'` is to make the graph horizontal.
keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
# model.summary()
# Train model
model.fit(train_ds, epochs=1, validation_data=validate_ds)
score = model.evaluate(x=test_labels, y=test_data, verbose=1)
#
print('Test loss: ', score[0])
print('Test accuracy: ', score[1])
The CSV looks like this:
group_timestamp,label,x_mean,x_median,x_stdev,x_raw_min,x_raw_max,x_abs_min,x_abs_max,y_mean,y_median,y_stdev,y_raw_min,y_raw_max,y_abs_min,y_abs_max,z_mean,z_median,z_stdev,z_raw_min,z_raw_max,z_abs_min,z_abs_max
2017-05-02 17:35:20,0,-8.40793368,-8.432378499999999,0.0812278134949539,-8.632295,-8.24563,8.24563,8.632295,-180900768.0,-180900768.0,0.0,-180900768.0,-180900768.0,180900768.0,180900768.0,180900768.0,180900768.0,0.0,180900768.0,180900768.0,180900768.0,180900768.0
2017-05-02 17:19:40,0,1.96025263,1.9716251,0.0710845152401064,1.816002,2.112883,1.816002,2.112883,-180900768.0,-180900768.0,0.0,-180900768.0,-180900768.0,180900768.0,180900768.0,180900752.0,180900752.0,0.0,180900752.0,180900752.0,180900752.0,180900752.0
...
How could I fix this?
Additionally, any other suggestions to fix or improve the code/model are welcome!
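For context, here is a minimal self-contained sketch of the structure `evaluate()` expects for a multi-input functional model, under the assumption that it must receive the same `(features_dict, labels)` pairs used for training; the toy frame and its two feature columns are hypothetical stand-ins for the question's 21 columns:

```python
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Toy frame with two hypothetical feature columns plus the label.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "label": rng.integers(0, 2, 16).astype("float32"),
    "x_mean": rng.random(16).astype("float32"),
    "y_mean": rng.random(16).astype("float32"),
})

def dataframe_to_dataset(frame):
    frame = frame.copy()
    labels = frame.pop("label").to_numpy().reshape((-1, 1))
    return tf.data.Dataset.from_tensor_slices((dict(frame), labels))

# One named Input per feature column, mirroring the 21-input model.
inputs = [keras.Input(shape=(1,), name=n) for n in ("x_mean", "y_mean")]
x = layers.Dense(4, activation="relu")(layers.concatenate(inputs))
output = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])

# evaluate() takes features as x and labels as y; with a batched dataset
# that already yields (features_dict, labels) pairs, pass the dataset alone.
test_ds = dataframe_to_dataset(df).batch(8)
score = model.evaluate(test_ds, verbose=0)
```

By contrast, `model.evaluate(x=test_labels, y=test_data)` passes the labels as the single input tensor (and in the wrong argument), which matches the reported "expects 21 input(s), but it received 1 input tensors" error.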