
Keras Input/Output

I am struggling with a challenge in TensorFlow/Keras; it would be great if someone could help me.

I have built a neural net in Keras with input_dim=3, then 10 neurons, and one output.

The input is a 3D vector of floats, and the output should be a single float value.

My problem is that I don't know how the floats should be formatted (greater than 1? from 0 to 1? etc.) and which loss function would work for this task (nothing binary, I guess). I want the neural net to compute a single float value from the 3D vector, but it never works out because my outputs are always the same.

If I have forgotten something, please let me know; if you have any ideas about it, that would be great!

Greetings

Edit: I'm aware that I need an introduction to the whole topic of machine learning, which I am working through right now. In the meantime, I would like to know how to use Keras to verify/practically apply machine learning. I am sorry for asking "stupid" questions, but I hope someone can help me.

Input: I think the input might be formatted "wrong" (it's not normalized, etc.), but I transformed the values I get into the interval mentioned below.
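For reference, one common way to put each input feature into a fixed interval is min-max scaling. A minimal NumPy sketch; the array X here is a hypothetical stand-in for the values loaded from the .csv file:

```python
import numpy as np

# Hypothetical stand-in for the feature columns loaded from the .csv file.
X = np.array([[0.5, 3.0, -2.0],
              [1.5, 6.0,  0.0],
              [1.0, 9.0,  2.0]])

# Min-max scaling: map each column independently into [0, 1].
X_min = X.min(axis=0)
X_max = X.max(axis=0)
X_scaled = (X - X_min) / (X_max - X_min)

print(X_scaled.min(), X_scaled.max())  # -> 0.0 1.0
```

Scaling each column by its own min and max keeps the features on a comparable scale, which usually helps gradient-based training even when it is not strictly required.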

This is my simple model:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# kernel_initializer replaces the deprecated init=, epochs replaces nb_epoch=
model.add(Dense(10, input_dim=3, kernel_initializer='normal', activation='sigmoid'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
model.fit(X_Train, Y_Train, epochs=100, batch_size=32, verbose=1)

X_Train and Y_Train are values extracted from a .csv file. For example, my values are [a, b, c, d], where 0 < a, b, c < 1 and -1 < d < 1 (d is the output).

Output:

Epoch 500/500

32/32 [==============================] - 0s - loss: 0.0813 - acc: 0.0000e+00

Example (randomly generated values); all outputs are nearly the same, around 0.43:

[ 0.97650245  0.30383579  0.74829968] [[ 0.43473071]]

[ 0.94985165  0.75347051  0.72609185] [[ 0.43473399]]

[ 0.18072594  0.18540003  0.20763266] [[ 0.43947196]]

Firstly, there is no need to normalize (or format) the input samples.

Secondly, the zero accuracy is because you used "accuracy" as the metric, which is meant for classification models. In your case, you should use something like "mse" or "mae" (in Keras) as the metric in your compile method, e.g.

model.compile(loss='mse', optimizer='sgd', metrics=['mae'])
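Put together, a minimal regression version of the model might look like the sketch below. The training data here is randomly generated only to make the example self-contained; it stands in for the values from the .csv file:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical random data standing in for the .csv values:
# inputs in (0, 1), target in (-1, 1).
rng = np.random.default_rng(0)
X_train = rng.random((32, 3))
y_train = rng.random(32) * 2 - 1

model = keras.Sequential([
    keras.Input(shape=(3,)),
    layers.Dense(10, activation='sigmoid'),
    layers.Dense(1),  # linear output for regression
])
# 'mae' instead of 'accuracy': accuracy is meaningless for regression.
model.compile(loss='mse', optimizer='sgd', metrics=['mae'])
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

print(model.predict(X_train[:1], verbose=0).shape)  # -> (1, 1)
```

With 'mae' as the metric, the training log reports the mean absolute error of the predictions instead of a meaningless classification accuracy.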

I'm answering my own question:

The problem here is the optimizer! The training data and all other settings are not that important. You have to try other optimizers to vary the results. It's possible to close this question now. Thank you for your help!
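Swapping the optimizer is a one-line change in compile. A brief sketch using Adam, which adapts its step size per parameter and often escapes the kind of plateau where every prediction collapses to the same value:

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(10, activation='sigmoid'),
    keras.layers.Dense(1),
])
# Same model, different optimizer: Adam instead of plain SGD.
# The learning_rate value here is just an illustrative choice.
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(learning_rate=0.01),
              metrics=['mae'])
```

If Adam alone does not help, varying the learning rate of whichever optimizer you use is the next knob worth turning.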

Change your output layer to the following: model.add(Dense(1))

See this regression guide, which discusses a single output: https://www.tensorflow.org/tutorials/keras/basic_regression
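The point of dropping the sigmoid matters here: a sigmoid output can only produce values in (0, 1), so it can never reach the negative part of the target range -1 < d < 1. A linear output, or tanh, which maps into (-1, 1), can. A minimal sketch:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(10, activation='sigmoid'),
    # tanh maps into (-1, 1), matching the target range;
    # Dense(1) with no activation (linear) would also work.
    keras.layers.Dense(1, activation='tanh'),
])
out = model.predict(np.zeros((1, 3)), verbose=0)
print(out.shape)  # -> (1, 1)
```

With an untrained model the actual value is random, but it is guaranteed to lie strictly between -1 and 1, unlike the sigmoid output in the original question.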
