
Tensorflow serving retrained inception

I am trying to serve my retrained Inception model following this guide (you may also see this guide, which explains how to retrain Inception). I've modified retrain.py to export my model as follows:

... # Same as in the original script:
# Set up the pre-trained graph.
maybe_download_and_extract()
graph, bottleneck_tensor, jpeg_data_tensor, resized_image_tensor = (create_inception_graph())
... # Same as in the original script:
# Add the new layer that we'll be training.
(train_step, cross_entropy, bottleneck_input, ground_truth_input, final_tensor) = add_final_training_ops(len(image_lists.keys()),
                                         FLAGS.final_tensor_name,
                                         bottleneck_tensor)
... # Added at the end of the original script:
# Export model
with graph.as_default():
    export_path = sys.argv[-1]
    print('Exporting trained model to', export_path)
    saver = tf.train.Saver(sharded=True)
    model_exporter = exporter.Exporter(saver)
    signature = exporter.classification_signature(input_tensor=jpeg_data_tensor, scores_tensor=final_tensor)
    model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
    model_exporter.export(export_path, tf.constant(FLAGS.export_version), sess)
    print('Done exporting!')

if __name__ == '__main__':
  tf.app.run()

After exporting my model, I start the server:

/serving/bazel-bin/tensorflow_serving/example/inception_inference --port=9000 EXPORT_DIR &> inception_log &

The server log file (inception_log) contains:

I tensorflow_serving/core/basic_manager.cc:190] Using InlineExecutor for BasicManager.
I tensorflow_serving/example/inception_inference.cc:384] Waiting for models to be loaded...
I tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:147] File-system polling found servable version {name: default version: 1} at path /tf_files/scope/export/00000001
I external/org_tensorflow/tensorflow/contrib/session_bundle/session_bundle.cc:129] Attempting to load a SessionBundle from: /tf_files/scope/export/00000001
I tensorflow_serving/example/inception_inference.cc:384] Waiting for models to be loaded...
I tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:147] File-system polling found servable version {name: default version: 1} at path /tf_files/scope/export/00000001
I external/org_tensorflow/tensorflow/contrib/session_bundle/session_bundle.cc:106] Running restore op for SessionBundle
I external/org_tensorflow/tensorflow/contrib/session_bundle/session_bundle.cc:203] Done loading SessionBundle
I tensorflow_serving/example/inception_inference.cc:350] Running...
I tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:147] File-system polling found servable version {name: default version: 1} at path /tf_files/scope/export/00000001
I tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:147] File-system polling found servable version {name: default version: 1} at path /tf_files/scope/export/00000001
I tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:147] File-system polling found servable version {name: default version: 1} at path /tf_files/scope/export/00000001
... 

Finally, I run the client and get the following error:

/serving/bazel-bin/tensorflow_serving/example/inception_client --server=localhost:9000 --image=TEST_IMG
D0805 09:10:46.208704633     200 ev_posix.c:101]             Using polling engine: poll
Traceback (most recent call last):
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tensorflow_serving/example/inception_client.py", line 53, in <module>
    tf.app.run()
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/external/org_tensorflow/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tensorflow_serving/example/inception_client.py", line 48, in main
    result = stub.Classify(request, 10.0)  # 10 secs timeout
  File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 300, in __call__
    self._request_serializer, self._response_deserializer)
  File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 198, in _blocking_unary_unary
    raise _abortion_error(rpc_error_call)
    grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="FetchOutputs node : not found")
E0805 09:10:47.129263239     200 chttp2_transport.c:1810]    close_transport: {"created":"@1470388247.129230608","description":"FD shutdown","file":"src/core/lib/iomgr/ev_poll_posix.c","file_line":427}

Any advice or guidance on this matter would be greatly appreciated.

So the link on the TensorFlow website is just one way to fully serve the model, in my experience. A better way would be to serve it from Flask and Kubernetes instead, since that is a lighter-weight setup than all of the TensorFlow Serving infrastructure. This assumes your request volume isn't very large (> 100 QPS); you could still serve Inception with Flask and Kubernetes at that load, but at that rate I would opt for the in-line solution.
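For reference, here is a minimal sketch of what a Flask endpoint around the retrained graph could look like. It is not from the original post: the model path, tensor names, and port are assumptions (the tensor names match the defaults retrain.py uses, but verify them against your export), and a production deployment would still wrap this in a container for Kubernetes.

# Minimal Flask sketch for serving the retrained graph (TF 1.x APIs).
# MODEL_PATH, INPUT_TENSOR, OUTPUT_TENSOR, and the port are assumed values.
from __future__ import print_function

import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

MODEL_PATH = '/tf_files/retrained_graph.pb'   # assumed path to the frozen retrained graph
INPUT_TENSOR = 'DecodeJpeg/contents:0'        # raw-JPEG input tensor used by retrain.py
OUTPUT_TENSOR = 'final_result:0'              # default final_tensor_name in retrain.py

app = Flask(__name__)

# Load the frozen graph once at startup and keep a single session around.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.FastGFile(MODEL_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')
sess = tf.Session(graph=graph)


@app.route('/classify', methods=['POST'])
def classify():
    # Expects the raw JPEG bytes as the request body.
    image_data = request.get_data()
    predictions = sess.run(graph.get_tensor_by_name(OUTPUT_TENSOR),
                           {INPUT_TENSOR: image_data})
    return jsonify(scores=np.squeeze(predictions).tolist())


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)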

You could serve it from a remote service, and that would work, but depending on your infrastructure you could also serve the model in a streaming job that pushes your requests through an apache_beam.DoFn and then outputs the results back to an MQ that your job is listening to. This is just another solution; an example DoFn is below. Hope this helps.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import logging 
import tensorflow as tf
import numpy as np
import apache_beam as beam


class InferenceFn(beam.DoFn):

  def __init__(self, model_dict):
    super(InferenceFn, self).__init__()
    self.model_dict = model_dict
    # Defer graph construction to start_bundle() so the DoFn stays picklable.
    self.graph = None


  def create_graph(self):
    # Fetch the model file if it is not already present locally.
    if not tf.gfile.Exists(self.model_dict['model_full_path']):
      self.download_model_file()
    with tf.Graph().as_default() as graph:
      with tf.gfile.FastGFile(self.model_dict['model_full_path'], 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')
    self.graph = graph

  def start_bundle(self):
    """Prevents graph object serialization until serving. Required for GCP Serving"""
    self.create_graph()

  def process(self, element):
    """Core Processing Fn for Apache Beam."""
    try:
      with tf.Session(graph=self.graph) as sess:
        if not tf.gfile.Exists(element):
          tf.logging.fatal('File does not exist %s', element)
          raise ReferenceError("Couldnt Find the image {}".format(element))
        data = tf.gfile.FastGFile(element, 'rb').read()
        output_tensor = sess.graph.get_tensor_by_name(self.model_dict['output_tensor_name'])
        # Feed the raw JPEG bytes to the model's input tensor and run the forward pass.
        predictions = sess.run(output_tensor, {self.model_dict['input_tensor_name']: data})
        predictions = np.squeeze(predictions)
        yield str(predictions)
    except Exception:
      # logging.exception also captures the traceback, not just the message.
      logging.exception("We hit an error in inference on {}".format(element))
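
For completeness, here is a hypothetical example of wiring this DoFn into a pipeline. The file paths and model_dict values are placeholders, not values from the original post, and need to match your exported graph.

# Hypothetical pipeline around InferenceFn: reads image paths from a text file,
# runs inference on each, and writes the prediction strings back out.
model_dict = {
    'model_full_path': '/tf_files/retrained_graph.pb',  # placeholder
    'input_tensor_name': 'DecodeJpeg/contents:0',       # placeholder
    'output_tensor_name': 'final_result:0',             # placeholder
}

with beam.Pipeline() as p:
  (p
   | 'ReadImagePaths' >> beam.io.ReadFromText('/tmp/image_paths.txt')
   | 'RunInference' >> beam.ParDo(InferenceFn(model_dict))
   | 'WriteResults' >> beam.io.WriteToText('/tmp/predictions'))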
