How can I correct the dimension error I keep getting when I train an RNN using the keras library?
How to train an NLP classifier using the keras library?
This is my training data; I want to use the keras library to predict 'y' from X_data. I have been getting errors for a long time. I know it is something about the data shape, but I have been stuck for a while. I hope you can help.
X_data =
0 [construction, materials, labour, charges, con...
1 [catering, catering, lunch]
2 [passenger, transport, local, transport, passe...
3 [goods, transport, road, transport, goods, inl...
4 [rental, rental, aircrafts]
5 [supporting, transport, cargo, handling, agenc...
6 [postal, courier, postal, courier, local, deli...
7 [electricity, charges, reimbursement, electric...
8 [facility, management, facility, management, p...
9 [leasing, leasing, aircrafts]
10 [professional, technical, business, selling, s...
11 [telecommunications, broadcasting, information...
12 [support, personnel, search, contract, tempora...
13 [maintenance, repair, installation, maintenanc...
14 [manufacturing, physical, inputs, owned, other...
15 [accommodation, hotel, accommodation, hotel, i...
16 [leasing, rental, leasing, renting, motor, veh...
17 [real, estate, rental, leasing, involving, pro...
18 [rental, transport, vehicles, rental, road, ve...
19 [cleaning, sanitary, pad, vending, machine]
20 [royalty, transfer, use, ip, intellectual, pro...
21 [legal, accounting, legal, accounting, legal, ...
22 [veterinary, clinic, health, care, relation, a...
23 [human, health, social, care, inpatient, medic...
Name: Data, dtype: object
And these are the labels I want to predict:
y =
0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 1
8 1
9 1
10 1
11 1
12 1
13 1
14 1
15 10
16 2
17 10
18 2
19 2
20 10
21 10
22 10
23 10
I am using this model:
top_words = 5000
length= len(X_data)
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(embedding_vecor_length, top_words, input_length=length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_data, y, epochs=3, batch_size=32)
ValueError: Error when checking input: expected embedding_8_input to have shape (None, 24) but got array with shape (24, 1)
What is wrong with using this data in this model? I want to predict 'y' from the input X_data.
You need to convert the pandas dataframe to numpy arrays. These will be ragged (every row has a different length), so you need to pad them. You also need to build a word-to-index dictionary, because you cannot pass raw words directly into a neural network. There are examples here, here and here. You will need to do your own research on this; not much can be done with the data sample you provided.
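As a rough sketch of what that dictionary-plus-padding step does, here it is in plain Python/numpy (in practice you would use keras' Tokenizer and pad_sequences; the three-sample word lists below are a made-up subset of the data):

```python
import numpy as np

docs = [["construction", "materials", "labour"],
        ["catering", "catering", "lunch"],
        ["rental", "rental", "aircrafts", "leasing"]]

# build a word -> integer dictionary (index 0 is reserved for padding)
vocab = {}
for doc in docs:
    for word in doc:
        vocab.setdefault(word, len(vocab) + 1)

# map words to integers, then right-pad every row with zeros to the same length
encoded = [[vocab[w] for w in doc] for doc in docs]
maxlen = max(len(row) for row in encoded)
X = np.array([row + [0] * (maxlen - len(row)) for row in encoded])

print(X.shape)  # (3, 4): 3 samples, each padded to 4 word indices
```

The resulting 2-D integer array is the shape of input the Embedding layer expects.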
length = len(X_data)
is how many data samples you have, which keras does not care about. It wants to know how many words each input has (every sample must be the same length, which is why padding is needed, as mentioned above).
So what you feed into the network is the number of columns:
#assuming you converted X_data correctly to numpy arrays and word vectors
#note: the vocabulary size (top_words) must come first in Embedding
model.add(Embedding(top_words, embedding_vecor_length, input_length=X_data.shape[1]))
Your class labels must be one-hot encoded:
from keras.utils import to_categorical
y = to_categorical(y)
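For example (using only the label values 1, 2 and 10 that appear in the sample above), to_categorical turns each integer label into a one-hot row whose width is max(label) + 1:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

y = np.array([1, 1, 2, 10])  # a few labels with the same values as the question's
y_cat = to_categorical(y)

print(y_cat.shape)  # (4, 11): one column for each class id 0..10
```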
Your last Dense layer needs one unit per class, and the correct activation for a multiclass problem is softmax. Note that with the labels above (highest value 10), to_categorical produces vectors of length 11, so it is safest to size the layer from the data:
model.add(Dense(y.shape[1], activation='softmax'))
Your loss must now be categorical_crossentropy, because this is a multiclass problem:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
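Putting all of the above together, a minimal end-to-end sketch (the input data is random stand-in word indices and the padded length of 6 is invented, since the real preprocessed arrays are not available; the label values match the question's sample):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.utils import to_categorical

top_words = 5000              # vocabulary size
embedding_vector_length = 32
maxlen = 6                    # hypothetical padded sequence length

# toy stand-in for the real data: 24 samples of padded word indices,
# labels with the same values as in the question (1, 2 and 10)
X = np.random.randint(1, top_words, size=(24, maxlen))
y = to_categorical(np.array([1] * 15 + [10, 2, 10, 2, 2, 10, 10, 10, 10]))

model = Sequential([
    Input(shape=(maxlen,)),
    Embedding(top_words, embedding_vector_length),  # vocabulary size comes first
    LSTM(100),
    Dense(y.shape[1], activation='softmax'),        # one unit per class column
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```

With random inputs the accuracy is meaningless; the point is only that the shapes line up and training runs without the dimension error.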