
How to predict using trained Tensorflow model

I have created and trained a neural network, but I would like to be able to feed in test points and see its predictions (rather than using an eval function).

The model runs fine and the cost decreases every epoch, but I just want to add a line at the end that passes in some input coordinates and returns the predicted transformed coordinates.

import tensorflow as tf
import numpy as np

def coordinate_transform(size, angle):
    input = np.random.rand(size, 2)
    output = np.zeros((size, 2))
    noise = 0.05*(np.add(np.random.rand(size) * 2, -1))
    theta = np.add(np.add(np.arctan(input[:,1] / input[:,0]) , angle) , noise)
    radii = np.sqrt(np.square(input[:,0]) + np.square(input[:,1]))
    output[:,0] = np.multiply(radii, np.cos(theta))
    output[:,1] = np.multiply(radii, np.sin(theta))
    return input, output

#Data
input, output = coordinate_transform(2000, np.pi/2)
train_in = input[:1000]
train_out = output[:1000]
test_in = input[1000:]
test_out = output[1000:]

# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 1
display_step = 1

# Network Parameters
n_hidden_1 = 100 # 1st layer number of features
n_input = 2 # [x,y]
n_classes = 2 # output x,y coords

# tf Graph input
x = tf.placeholder("float", [1, n_input])
y = tf.placeholder("float", [1, n_classes])

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
#cost = tf.losses.mean_squared_error(0, (tf.slice(pred, 0, 1) - x)**2 + (tf.slice(pred, 1, 1) - y)**2)
cost = tf.losses.mean_squared_error(y, pred)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
optimizer = optimizer.minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(len(train_in) / batch_size)  # 1000 with batch_size = 1
        # Loop over all batches
        for i in range(total_batch):
            batch_x = train_in[i].reshape((1,2))
            batch_y = train_out[i].reshape((1,2))

            #print(batch_x.shape)
            #print(batch_y.shape)
            #print(batch_y, batch_x)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print ("Epoch:", '%04d' % (epoch+1), "cost=", \
                "{:.9f}".format(avg_cost))
    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

    #Make predictions

Well, the 'pred' op is your actual output (it's what gets compared with y when the loss is computed), so something like the following should do the trick:

print(sess.run([pred], feed_dict={x: _INPUT_GOES_HERE_}))

Obviously _INPUT_GOES_HERE_ will need to be replaced by the actual input (an array of shape [1, 2], matching the x placeholder).
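
For example, here is a minimal sketch run inside the existing training session, using the test_in / test_out arrays already defined in the question (the sample index 0 is just for illustration):

# After training, feed one held-out point through the network and compare
# the prediction with the true transformed coordinates.
sample_in = test_in[0].reshape((1, 2))   # shape must match the x placeholder: [1, 2]
prediction = sess.run(pred, feed_dict={x: sample_in})
print("input:     ", sample_in)
print("predicted: ", prediction)
print("expected:  ", test_out[0])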

You can also use the tensorflow.python.saved_model libs to save your model in a format that can be served by TensorFlow Serving.

TensorFlow Serving recently became a whole lot easier to install and set up:

https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md#installing-using-apt-get

Below is some sample code (you'll need to adjust the feeds/inputs and fetches/outputs for your use case).

Create a SignatureDef for your model:

from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils

graph = tf.get_default_graph()

x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')

tensor_info_x_observed = utils.build_tensor_info(x_observed)
print(tensor_info_x_observed)

tensor_info_y_pred = utils.build_tensor_info(y_pred)
print(tensor_info_y_pred)

prediction_signature = signature_def_utils.build_signature_def(
    inputs={'x_observed': tensor_info_x_observed},
    outputs={'y_pred': tensor_info_y_pred},
    method_name=signature_constants.PREDICT_METHOD_NAME)

Use SavedModelBuilder to save your model with the SignatureDef defined above:

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants

version = 1  # example version number; adjust for your own export layout
unoptimized_saved_model_path = '/root/models/linear_unoptimized/cpu/%s' % version
print(unoptimized_saved_model_path)

builder = saved_model_builder.SavedModelBuilder(unoptimized_saved_model_path)
builder.add_meta_graph_and_variables(
    sess,
    [tag_constants.SERVING],
    signature_def_map={
        'predict': prediction_signature,
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature,
    },
    clear_devices=True,
)

builder.save(as_text=False)
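
As a quick sanity check, here is a minimal sketch (not part of the original answer) that loads the export back into a fresh session with TF 1.x's saved_model loader and queries it directly; it assumes the sample tensor names 'x_observed:0' and 'add:0' and a two-element input, matching the example code above:

import tensorflow as tf
from tensorflow.python.saved_model import loader, tag_constants

# Load the export produced above into a clean graph and run one prediction.
# unoptimized_saved_model_path is the export directory from the previous step.
with tf.Session(graph=tf.Graph()) as restore_sess:
    loader.load(restore_sess, [tag_constants.SERVING], unoptimized_saved_model_path)
    x_observed = restore_sess.graph.get_tensor_by_name('x_observed:0')
    y_pred = restore_sess.graph.get_tensor_by_name('add:0')
    print(restore_sess.run(y_pred, feed_dict={x_observed: [[0.5, 0.5]]}))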

More details in the GitHub and Docker repos referenced here: http://pipeline.ai
