
What is wrong with my neural network model?

I have a dataset of 178 elements, each containing 13 features and 1 label. The labels are stored as one-hot arrays. My training dataset is made of 158 elements.
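For clarity, the array shapes look like this (a minimal sketch with placeholder values, assuming numpy; the actual loading code isn't shown here):

import numpy as np

# placeholder data with the shapes described above
training_data = np.random.rand(158, 13).astype(np.float32)     # 158 samples, 13 features
class_ids = np.random.randint(0, 3, size=158)                  # integer labels 0..2
training_data_labels = np.eye(3, dtype=np.float32)[class_ids]  # one-hot, shape (158, 3)
eval_data = np.random.rand(20, 13).astype(np.float32)          # remaining 20 samples
eval_data_labels = np.eye(3, dtype=np.float32)[np.random.randint(0, 3, size=20)]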

Here is what my model looks like:

import tensorflow as tf  # TF1-style graph API (placeholders/Sessions)

x = tf.placeholder(tf.float32, [None,training_data.shape[1]])
y_ = tf.placeholder(tf.float32, [None,training_data_labels.shape[1]])

node_1 = 300
node_2 = 300
node_3 = 300
out_n = 3   

#1
W1 = tf.Variable(tf.random_normal([training_data.shape[1], node_1]))
B1 = tf.Variable(tf.random_normal([node_1]))
y1 = tf.add(tf.matmul(x,W1),B1)
y1 = tf.nn.relu(y1)

#2
W2 = tf.Variable(tf.random_normal([node_1, node_2]))
B2 = tf.Variable(tf.random_normal([node_2]))
y2 = tf.add(tf.matmul(y1,W2),B2)
y2 = tf.nn.relu(y2)

#3
W3 = tf.Variable(tf.random_normal([node_2, node_3]))
B3 = tf.Variable(tf.random_normal([node_3]))
y3 = tf.add(tf.matmul(y2,W3),B3)
y3 = tf.nn.relu(y3)

#output
W4 = tf.Variable(tf.random_normal([node_3, out_n]))
B4 = tf.Variable(tf.random_normal([out_n]))
y4 = tf.add(tf.matmul(y3,W4),B4)
y = tf.nn.softmax(y4)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        sess.run(optimizer,feed_dict={x:training_data, y_:training_data_labels})

    correct = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
    print('Accuracy:',accuracy.eval({x:eval_data, y_:eval_data_labels}))

But the accuracy is very low. I tried increasing the range from 200 to a higher number, but it still remains low.

What could I do to improve the results?

The problem is that you're taking the softmax of y4 and then passing that to tf.nn.softmax_cross_entropy_with_logits. This error is common enough that there's actually a note about it in the documentation for softmax_cross_entropy_with_logits:

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally 
for efficiency. Do not call this op with the output of softmax, as it will produce 
incorrect results.
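Because the op computes softmax(logits) internally, your loss is effectively evaluated on softmax(softmax(y4)), a much flatter distribution, which produces the wrong loss and gradients. A quick numeric illustration, assuming numpy:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))           # ~[0.659, 0.242, 0.099]
print(softmax(softmax(logits)))  # ~[0.448, 0.296, 0.256] -- flattened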

The rest of your code looks fine, so just rename y4 to y, so that y holds the raw logits, and get rid of y = tf.nn.softmax(y4).
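Concretely, the output and loss section becomes the following (everything else unchanged; the probs line is an optional extra, only if you want actual probabilities to inspect, since tf.argmax gives the same predictions on raw logits):

#output
W4 = tf.Variable(tf.random_normal([node_3, out_n]))
B4 = tf.Variable(tf.random_normal([out_n]))
y = tf.add(tf.matmul(y3,W4),B4)   # raw logits, no softmax here

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

probs = tf.nn.softmax(y)          # optional, for inspection only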
