
Testing a trained neural network using TensorFlow in Python

I have an Excel file with columns like this:

Disp          force         Set-1         Set-2
0             0             0             0
0.000100011   10.85980847   10.79430294   10.89428425
0.000200021   21.71961695   21.58860588   21.7885685
0.000350037   38.00932966   37.780056     38.12999725

To model this data (taking the first two columns as inputs and the next two columns as outputs), I wrote a simple feedforward neural network in Python:

import tensorflow as tf
import numpy as np
import pandas as pd
#import matplotlib.pyplot as plt
rng = np.random

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
#################################################
# In[180]:

# Parameters
learning_rate = 0.01
training_epochs = 20
display_step = 1

# Read data from CSV
a = r'C:\Downloads\international-financial-statistics\DataUpdated.csv'
df = pd.read_csv(a,encoding = "ISO-8859-1")

# Separating out the dependent & independent variables

train_x = df[['Disp','force']]
train_y = df[['Set-1','Set-2']]
#############################################added by me
trainx=StandardScaler().fit_transform(train_x)
trainy=StandardScaler().fit_transform(train_y)
#### At test time, the test set should be scaled with the same scaler fitted on the training data, and once you get the output you need to rescale it back to the original units
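#### Illustrative sketch of that step (scaler_x, scaler_y, test_x are hypothetical names;
#### note that fit_transform above discards the scaler objects this sketch needs):
# scaler_x = StandardScaler().fit(train_x)         # fit on training data only, keep the object
# scaler_y = StandardScaler().fit(train_y)
# testx = scaler_x.transform(test_x)               # scale test inputs with training statistics
# pred = scaler_y.inverse_transform(net_out)       # rescale network output back to original units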

n_input = 2
n_classes = 2
n_hidden_1 = 40
n_hidden_2 = 40
n_samples = 2100

# tf Graph Input
# A placeholder is a tensor that will always be fed at run time.
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Set model weights
W_h1 = tf.Variable(tf.random_normal([n_input, n_hidden_1]))
W_h2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]))
W_out = tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
b_h1 = tf.Variable(tf.zeros([n_hidden_1]))
b_h2 = tf.Variable(tf.zeros([n_hidden_2]))
b_out = tf.Variable(tf.zeros([n_classes]))


# Construct the model: two ReLU hidden layers with a linear output layer
layer_1 = tf.add(tf.matmul(x, W_h1), b_h1)
layer_1 = tf.nn.relu(layer_1)
layer_2 = tf.add(tf.matmul(layer_1, W_h2), b_h2)
layer_2 = tf.nn.relu(layer_2)
out_layer = tf.matmul(layer_2, W_out) + b_out

# Mean squared error (note: tf.reduce_mean already averages, so the extra /(2*n_samples) only rescales the loss)
cost = tf.reduce_mean(tf.pow(out_layer-y, 2))/(2*n_samples)
# Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        _, c = sess.run([optimizer, cost], feed_dict={x: trainx,y: trainy})

        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={x: trainx,y: trainy})
    print(training_cost)

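    # Note: the query below feeds a raw, unscaled input to a network trained
    # on scaled inputs, and the scaled output is never converted back.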
    best = sess.run([out_layer], feed_dict={x: np.array([[0.0001,10.85981]])})
    print(best)

I would like to know the correct way to test the accuracy of my neural network. For example, I would like to pass the inputs 0.000100011 and 10.85980847 and retrieve the two associated outputs. I tried to do this (see the last two lines of the code above), but it gives me bad results.

Thanks in advance.

Since your output values are continuous, this is a regression problem. You can therefore use root mean squared error (RMSE) as the metric to measure your error rate, and use cross-validation so the model is evaluated on unseen data. A sample example is shown here: https://github.com/naveenkambham/MachineLearningModels/blob/master/NeuralNetwork.py
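A minimal sketch of that evaluation against the question's own graph (x, out_layer, sess), assuming the scalers are kept as fitted objects (scaler_x and scaler_y are illustrative names) instead of being discarded by fit_transform:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hold out a test set before scaling so it stays truly unseen
train_x, test_x, train_y, test_y = train_test_split(
    df[['Disp', 'force']], df[['Set-1', 'Set-2']],
    test_size=0.2, random_state=42)

# Fit the scalers on the training data only, and keep them
scaler_x = StandardScaler().fit(train_x)
scaler_y = StandardScaler().fit(train_y)
trainx = scaler_x.transform(train_x)
trainy = scaler_y.transform(train_y)

# ... train exactly as in the question, then, still inside the session:
pred_scaled = sess.run(out_layer, feed_dict={x: scaler_x.transform(test_x)})
pred = scaler_y.inverse_transform(pred_scaled)        # back to original units

rmse = np.sqrt(np.mean((pred - test_y.values) ** 2))  # root mean squared error
print("Test RMSE:", rmse)

# A single query point must be scaled the same way before feeding it
point = scaler_x.transform(np.array([[0.000100011, 10.85980847]]))
print(scaler_y.inverse_transform(sess.run(out_layer, feed_dict={x: point})))

For cross-validation, sklearn.model_selection.KFold can repeat this split/train/score loop over several folds, averaging the per-fold RMSE.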
