
Tensorflow Python - easiest way to create a NN from plain data (txt file)

I have the following training data:

input -- output
1993,0,420,3,4,6 -- 1,0
1990,0,300,5,3,5 -- 0,1
1991,1,300,9,4,3 -- 0.5,0.5
...

So there are 6 input nodes and 2 output nodes, and the output values can be 1,0, 0,1 or 0.5,0.5.

What's the easiest way to pass this data to TensorFlow and train a NN?

At this point I'm not (yet) interested in the best network architecture; I'd just like to have a Python script that trains a NN.

Thanks!

Guido, you can train your neural network using the code below. It builds a graph with one hidden layer; you then just have to feed your data and run a training loop.

import tensorflow as tf

input_size = 6
output_size = 2
hidden_size = 6

# Placeholders for the input features and the (soft) target labels
input_data = tf.placeholder(tf.float32, [None, input_size], name="input_data")
input_y = tf.placeholder(tf.float32, [None, output_size], name="input_y")

with tf.variable_scope("hidden_Layer"):
    # Weights and biases must be tf.Variable so the optimizer can update them
    weight = tf.Variable(tf.truncated_normal([input_size, hidden_size], stddev=0.01))
    bias = tf.Variable(tf.constant(0.1, shape=[hidden_size]))
    hidden_input = tf.nn.bias_add(tf.matmul(input_data, weight), bias)
    hidden_output = tf.nn.relu(hidden_input, name="Hidden_Output")

with tf.variable_scope("output_Layer"):
    weight2 = tf.Variable(tf.truncated_normal([hidden_size, output_size], stddev=0.01))
    bias2 = tf.Variable(tf.constant(0.1, shape=[output_size]))
    logits = tf.nn.bias_add(tf.matmul(hidden_output, weight2), bias2)
    predictions = tf.argmax(logits, 1, name="predictions")
    # Softmax cross-entropy also accepts soft labels such as 0.5,0.5
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=input_y))
    correct_predictions = tf.equal(predictions, tf.argmax(input_y, 1))
    classification_accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
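To read the plain-text training file and actually run the optimization, a minimal sketch along the lines below should work on top of the graph above. The file name data.txt, the learning rate and the number of epochs are assumptions of mine, not something from the original post; each line of the file is expected to look exactly like the samples in the question (features, then " -- ", then labels).

import numpy as np

# Hypothetical file name; each line looks like "1993,0,420,3,4,6 -- 1,0"
def load_data(path="data.txt"):
    features, labels = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            left, right = line.split("--")
            features.append([float(v) for v in left.strip().split(",")])
            labels.append([float(v) for v in right.strip().split(",")])
    return np.array(features, dtype=np.float32), np.array(labels, dtype=np.float32)

x_train, y_train = load_data()

# Add a training op on top of the loss defined above (learning rate is a guess)
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
        _, cur_loss, cur_acc = sess.run(
            [train_op, loss, classification_accuracy],
            feed_dict={input_data: x_train, input_y: y_train})
        if epoch % 100 == 0:
            print("epoch", epoch, "loss", cur_loss, "accuracy", cur_acc)

Note that this is TensorFlow 1.x style code (placeholders and sessions); under TensorFlow 2 you would have to go through tf.compat.v1 with eager execution disabled, or rewrite the model with tf.keras.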
