
CSV >> Tensorflow >> regression (via neural network) model

Endless Googling has left me better educated on Python and numpy, but still clueless on solving my task. I want to read a CSV of integer/floating point values and predict a value using a neural network. I have found several examples that read the Iris dataset and do classification, but I don't understand how to make them work for regression. Can someone help me connect the dots?

Here is one line of the input:

16804,0,1,0,1,1,0,1,0,1,0,1,0,0,1,1,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,1,0,0,1,1,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,1,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,1,0,0,1,0,1,0,1,0,1,1,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,1,0,0,1,0,1,0,1,0,1,0,1,0,1,1,0,0,1,0,0,0,1,1,0,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.490265,0.620805,0.54977,0.869299,0.422268,0.351223,0.33572,0.68308,0.40455,0.47779,0.307628,0.301921,0.318646,0.365993,6135.81

That should be 925 values. The last column is the output. The first is the RowID. Most are binary values because I've already done one-hot encoding. The test files do not have the output/last column. The full training file has around 10M rows. A general MxN solution will do.
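A generic M x N loader for this layout (first column is a RowID to drop, last column is the target) can be sketched with plain numpy. The tiny inline CSV below is a made-up stand-in for the real 925-column file:

```python
import io
import numpy as np

# A tiny stand-in for the real file: RowID, features..., target.
csv_text = """101,0,1,0.5,42.0
102,1,0,0.25,13.5
103,1,1,0.75,99.0"""

data = np.loadtxt(io.StringIO(csv_text), delimiter=",")
X = data[:, 1:-1]   # drop the RowID column and the target column
y = data[:, -1]     # last column is the value to predict
print(X.shape, y.shape)
```

For the real file, replace `io.StringIO(csv_text)` with the filename; `np.loadtxt` works the same for any M x N shape.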

Edit: Let's use this sample data since Iris is a classification problem, but note that the above is my real target. I removed the ID column. Let's predict the last column given the 6 other columns. This has 45 rows. (src: http://www.stat.ufl.edu/~winner/data/civwar2.dat )

100,1861,5,2,3,5,38
112,1863,11,7,4,59.82,15.18
113,1862,34,32,1,79.65,2.65
90,1862,5,2,3,68.89,5.56
93,1862,14,10,4,61.29,17.2
179,1862,22,19,3,62.01,8.89
99,1861,22,16,6,67.68,27.27
111,1862,16,11,4,78.38,8.11
107,1863,17,11,5,60.75,5.61
156,1862,32,30,2,60.9,12.82
152,1862,23,21,2,73.55,6.41
72,1863,7,3,3,54.17,20.83
134,1862,22,21,1,67.91,9.7
180,1862,23,16,4,69.44,3.89
143,1863,23,19,4,81.12,8.39
110,1862,16,12,2,31.82,9.09
157,1862,15,10,5,52.23,24.84
101,1863,4,1,3,58.42,18.81
115,1862,14,11,3,86.96,5.22
103,1862,7,6,1,70.87,0
90,1862,11,11,0,70,4.44
105,1862,20,17,3,80,4.76
104,1862,11,9,1,29.81,9.62
102,1862,17,10,7,49.02,6.86
112,1862,19,14,5,26.79,14.29
87,1862,6,3,3,8.05,72.41
92,1862,4,3,0,11.96,86.96
108,1862,12,7,3,16.67,25
86,1864,0,0,0,2.33,11.63
82,1864,4,3,1,81.71,8.54
76,1864,1,0,1,48.68,6.58
79,1864,0,0,0,15.19,21.52
85,1864,1,1,0,89.41,3.53
85,1864,1,1,0,56.47,0
85,1864,0,0,0,31.76,15.29
87,1864,6,5,0,81.61,3.45
85,1864,5,5,0,72.94,0
83,1864,0,0,0,46.99,2.38
101,1864,5,5,0,1.98,95.05
99,1864,6,6,0,42.42,9.09
10,1864,0,0,0,50,9
98,1864,6,6,0,79.59,3.06
10,1864,0,0,0,71,9
78,1864,5,5,0,70.51,1.28
89,1864,4,4,0,59.55,13.48
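Loading this sample and splitting it into the 6 predictor columns and the target is a few lines of numpy. The snippet below embeds the first few records quoted above to stay self-contained; for the real file, point `np.loadtxt` at the .dat path instead:

```python
import io
import numpy as np

# First few records of the civwar2 sample quoted above;
# the 7th (last) column is the one to predict.
rows = """100,1861,5,2,3,5,38
112,1863,11,7,4,59.82,15.18
113,1862,34,32,1,79.65,2.65
90,1862,5,2,3,68.89,5.56"""

data = np.loadtxt(io.StringIO(rows), delimiter=",")
X = data[:, :6]   # the 6 predictor columns
y = data[:, 6]    # the last column is the regression target
print(X.shape, y.shape)
```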

Let me add that this is a common task, but it doesn't seem to be answered by any forum I've read, hence this question. I could give you my broken code, but I don't want to waste your time with code that is not functionally correct. Sorry I've asked it this way. I just don't understand the APIs, and the documentation doesn't tell me the data types.

Here is the latest code I have that reads the CSV into two ndarrays:

#!/usr/bin/env python
import numpy as np

def buildDataFromIris():
    # Read the whole CSV; column 924 (the last) is the target, the rest are features.
    data = np.loadtxt(open("t100.csv.out", "rb"), delimiter=",", skiprows=0)
    labels = data[:, 924]
    print("labels:", type(labels), labels.shape, labels.ndim)
    data = np.delete(data, [924], axis=1)
    print("data:", type(data), data.shape, data.ndim)
    return data, labels

And here is the base code that I want to use. The example it came from wasn't complete either. The APIs in the links below are vague. If I can at least figure out the data types that go into DNNRegressor and the others in the docs, I might be able to write some custom code.

estimator = tf.contrib.learn.DNNRegressor(
    feature_columns=[education_emb, occupation_emb],
    hidden_units=[1024, 512, 256])

# Or an estimator using the ProximalAdagradOptimizer with regularization.
estimator = tf.contrib.learn.DNNRegressor(
    feature_columns=[education_emb, occupation_emb],
    hidden_units=[1024, 512, 256],
    optimizer=tf.train.ProximalAdagradOptimizer(
      learning_rate=0.1,
      l1_regularization_strength=0.001
    ))

# Input builders
def input_fn_train():  # returns x, y
  pass
estimator.fit(input_fn=input_fn_train)

def input_fn_eval():  # returns x, y
  pass
estimator.evaluate(input_fn=input_fn_eval)
estimator.predict(x=x)

And then the big question is how to get these to work together.

Here are a few pages I've been looking at.

I've found lower-level Tensorflow pretty hard to figure out in the past as well, and the documentation hasn't been amazing. If you instead focus on getting the hang of sklearn, you should find it relatively easy to work with skflow. skflow is at a much higher level than tensorflow, and it has almost the same API as sklearn.

Now to the answer:

As a regression example, we'll just perform regression on the iris dataset. Now this is a silly idea, but it's just to demonstrate how to use DNNRegressor.

Skflow API

The first time you use a new API, try to use as few parameters as possible. You just want to get something working. So, I propose you can set up a DNNRegressor like this:

estimator = skflow.DNNRegressor(hidden_units=[16, 16])

I kept my number of hidden units small because I don't have much computational power right now.

Then you give it the training data, train_X, and training labels, train_y, and you fit it as follows:

estimator.fit(train_X, train_y)

This is the standard procedure for all sklearn classifiers and regressors, and skflow just extends tensorflow to be similar to sklearn. I also set the parameter steps=10 so that the training finishes faster, since it only runs for 10 iterations.

Now, if you want it to predict on some new data, test_X, you do that as follows:

pred = estimator.predict(test_X)

Again, this is standard procedure for all sklearn code. So that's it - skflow is so simplified you just need those three lines!

What's the format of train_X and train_y?

If you aren't too familiar with machine learning, your training data is generally an ndarray (matrix) of size M x d, where you have M training examples and d features. Your labels are M x 1 (an ndarray of shape (M,)).

So what you have is something like this:

Features:   Sepal Width    Sepal Length ...               Labels
          [   5.1            2.5             ]         [0 (setosa)     ]
  X =     [   2.3            2.4             ]     y = [1 (virginica)  ]
          [   ...             ...            ]         [    ....       ]
          [   1.3            4.5             ]         [2 (Versicolour)]

(Note I just made all those numbers up.)

The test data will just be an N x d matrix, where you have N test examples. The test examples all need to have the same d features. The predict function will take in the test data and return to you the test labels of shape N x 1 (an ndarray of shape (N,)).
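The shape conventions above can be checked directly in numpy. This sketch uses random data (the sizes M, d, and N are made up) just to show which shapes go where:

```python
import numpy as np

M, d, N = 6, 4, 3              # 6 training rows, 4 features, 3 test rows
rng = np.random.default_rng(0)

train_X = rng.random((M, d))   # features: one row per training example
train_y = rng.random(M)        # labels: shape (M,), not (M, 1)
test_X = rng.random((N, d))    # test rows must have the same d features

print(train_X.shape, train_y.shape, test_X.shape)
```

Anything you fit or predict with should match these shapes; a shape of (M, 1) for the labels usually needs a `.ravel()` first.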

You didn't supply your .csv file, so I'll let you parse the data into that format. Conveniently, though, we can use sklearn.datasets.load_iris() to get the X and y we want. It's just

iris = datasets.load_iris()
X = iris.data 
y = iris.target

Using a Regressor as a Classifier

The output of your DNNRegressor will be a bunch of real numbers (like 1.6789). But the iris dataset has labels 0, 1, and 2 - the integer IDs for Setosa, Versicolour, and Virginica. To perform a classification with this regressor, we will just round to the nearest label (0, 1, 2). For example, a prediction of 1.6789 will round to 2.
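The rounding step is one line of numpy. The predictions below are made-up values standing in for real regressor output; clipping keeps out-of-range predictions inside the valid label set:

```python
import numpy as np

# Hypothetical raw regressor outputs for four iris examples.
pred = np.array([1.6789, -0.2, 0.4, 2.7])

# Round to the nearest class ID and clamp into the label range [0, 2].
classes = np.clip(np.rint(pred), 0, 2).astype(int)
print(classes)  # 1.6789 -> 2, -0.2 -> 0, 0.4 -> 0, 2.7 -> 2
```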

Working Example

I find I learn the most with a working example. So here's a very simplified working example:

(The working example was originally posted as an image and is not reproduced here.)
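Since the exact code in the image is lost, here is a sketch of the same three-step workflow (fit, predict, round) using scikit-learn's `MLPRegressor` as a stand-in; its fit/predict API mirrors what skflow's `DNNRegressor` exposed, and the hidden-unit sizes match the answer's choice:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

iris = load_iris()
train_X, test_X, train_y, test_y = train_test_split(
    iris.data, iris.target, random_state=0)

# Small network, as in the answer; max_iter kept modest for speed.
estimator = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=500,
                         random_state=0)
estimator.fit(train_X, train_y)

pred = estimator.predict(test_X)                         # real-valued outputs
pred_classes = np.clip(np.rint(pred), 0, 2).astype(int)  # round to 0/1/2
print("accuracy:", np.mean(pred_classes == test_y))
```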

Feel free to post any further questions as a comment.

I ended up with a few options. I don't know why it was so difficult to get up and running. First, here is the code based on @user2570465.

import tensorflow as tf
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
import tensorflow.contrib.learn as skflow

def buildDataFromIris():
    iris = datasets.load_iris()
    return iris.data, iris.target

X, y = buildDataFromIris()
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X)
estimator = skflow.DNNRegressor( feature_columns=feature_cols, hidden_units=[10, 10])
train_X, test_X, train_y, test_y = train_test_split(X, y)
estimator.fit(train_X, train_y, steps=10)

test_preds = estimator.predict(test_X)

def CalculateAccuracy(X, y):
    continuous_predictions = estimator.predict(X)
    closest_class = []
    for pred in continuous_predictions:
        differences = np.array([abs(pred - 0), abs(pred - 1), abs(pred - 2)])
        closest_class.append(np.argmin(differences))

    num_correct = np.sum(closest_class == y)
    accuracy = float(num_correct)/len(y)
    return accuracy

train_accuracy = CalculateAccuracy(train_X, train_y)
test_accuracy = CalculateAccuracy(test_X, test_y)

print("Train accuracy: %f" % train_accuracy)
print("Test accuracy: %f" % test_accuracy)

The other solutions build the model from smaller components. Here is a snippet that computes Sig(X*W1+b1)*W2+b2 = Y, with optimizer=Adam, loss=L2, and eval=L2 and MSE.

# Split the data into training and validation sets.
x_train = X[:train_size]
y_train = Y[:train_size]
x_val = X[train_size:]
y_val = Y[train_size:]
print("x_train: {}".format(x_train.shape))

# Build the model: X -> sigmoid hidden layer -> linear output.
X = tf.placeholder(tf.float32, [None, n_input], name = 'X')
Y = tf.placeholder(tf.float32, [None, n_output], name = 'Y')

w_h = tf.Variable(tf.random_uniform([n_input, layer1_neurons], minval=-1, maxval=1, dtype=tf.float32))
b_h = tf.Variable(tf.zeros([1, layer1_neurons], dtype=tf.float32))
h = tf.nn.sigmoid(tf.matmul(X, w_h) + b_h)

w_o = tf.Variable(tf.random_uniform([layer1_neurons, 1], minval=-1, maxval=1, dtype=tf.float32))
b_o = tf.Variable(tf.zeros([1, 1], dtype=tf.float32))
model = tf.matmul(h, w_o) + b_o

loss = tf.nn.l2_loss(model - Y)   # L2 loss: sum((model - Y) ** 2) / 2
train_op = tf.train.AdamOptimizer().minimize(loss)

# Evaluation metric in the same L2 form.
output = tf.reduce_sum(tf.square(model - Y)) / 2

#launch the session
sess = tf.Session()
sess.run(tf.global_variables_initializer())

errors = []
for i in range(numEpochs):
    for start, end in zip(range(0, len(x_train), batchSize), range(batchSize, len(x_train), batchSize)):
        sess.run(train_op, feed_dict={X: x_train[start:end], Y: y_train[start:end]})
    cost = sess.run(output, feed_dict={X: x_val, Y: y_val})
    errors.append(cost)
    if i%100 == 0: print("epoch %d, cost = %g" % (i,cost))
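For reference, the graph above computes nothing more than Sig(X*W1+b1)*W2+b2. A pure-numpy forward pass of the same architecture (weights here are made-up stand-ins for the trained tf.Variables, and the shapes are assumptions) makes the computation explicit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_input, layer1_neurons = 4, 8
rng = np.random.default_rng(0)

# Made-up weights standing in for the trained variables.
w_h = rng.uniform(-1, 1, (n_input, layer1_neurons))
b_h = np.zeros((1, layer1_neurons))
w_o = rng.uniform(-1, 1, (layer1_neurons, 1))
b_o = np.zeros((1, 1))

x = rng.random((5, n_input))     # a batch of 5 inputs
h = sigmoid(x @ w_h + b_h)       # hidden layer: Sig(X*W1 + b1)
pred = h @ w_o + b_o             # linear output: h*W2 + b2
print(pred.shape)                # one prediction per input row
```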
