
TensorFlow: preload multiple models

General question: How can you avoid having to rebuild a model for each inference request?

I'm trying to develop a web service that contains multiple trained models which can be used to request a prediction. Producing a result is currently very time-consuming because the model needs to be rebuilt for each request. The inference itself only takes 30 ms, but importing the model takes more than a second.
I'm having difficulty splitting the importing and the inference into two separate methods because of the session they both need.

The solution I came up with is to use an InteractiveSession that is stored in a variable. When the object is created, the model is loaded inside this session, which remains open. When a request is submitted, this preloaded model is then used to generate the result.

Problem with this solution:
When creating several of these objects for different models, multiple InteractiveSessions are open at the same time. TensorFlow generates the following warning:

Nesting violated for default stack of <class 'tensorflow.python.framework.ops.Graph'> objects

Any ideas how to manage multiple sessions and preload models?

import importlib

import numpy as np
import tensorflow as tf

# model_helper, utils, nmt_utils, tokenize_input_string and untokenize_output_string
# are assumed to be importable from the surrounding (g2p_tensor / nmt) project.


class model_inference:
    def __init__(self, language_name, base_module="models"):
        """
        Load a network that can be used to perform inference.

        Args:

            language_name (str): The name of an importable language class,
                returning an instance of `BaseLanguageModel`. This class
                should be importable from `base_module`.

            base_module (str): The module from which to import the
                `language_name` class.

        Attributes:

            ckpt (str): The model checkpoint value.
            infer_model (g2p_tensor.nmt.model_helper.InferModel):
                The language infer_model instance.
        """

        language_instance = getattr(
            importlib.import_module(base_module), language_name
        )()
        self.ckpt = language_instance.checkpoint
        self.infer_model = language_instance.infer_model
        self.hparams = language_instance.hparams
        self.rebuild_infer_model()

    def rebuild_infer_model(self):
        """
        Recreate the infer model after changing hparams.
        This is time-consuming.
        :return:
        """
        self.session = tf.InteractiveSession(
            graph=self.infer_model.graph, config=utils.get_config_proto()
        )
        self.model = model_helper.load_model(
            self.infer_model.model, self.ckpt, self.session, "infer"
        )

    def infer_once(self, in_string):
        """
        Entry point of the service; should not rebuild the model.
        """
        in_data = tokenize_input_string(in_string)

        self.session.run(
            self.infer_model.iterator.initializer,
            feed_dict={
                self.infer_model.src_placeholder: [in_data],
                self.infer_model.batch_size_placeholder: self.hparams.infer_batch_size,
            },
        )

        subword_option = self.hparams.subword_option
        beam_width = self.hparams.beam_width
        tgt_eos = self.hparams.eos
        num_translations_per_input = self.hparams.num_translations_per_input

        num_sentences = 0

        num_translations_per_input = max(
            min(num_translations_per_input, beam_width), 1
        )

        nmt_outputs, _ = self.model.decode(self.session)
        if beam_width == 0:
            nmt_outputs = np.expand_dims(nmt_outputs, 0)

        batch_size = nmt_outputs.shape[1]
        num_sentences += batch_size

        for sent_id in range(batch_size):
            for beam_id in range(num_translations_per_input):
                translation = nmt_utils.get_translation(
                    nmt_outputs[beam_id],
                    sent_id,
                    tgt_eos=tgt_eos,
                    subword_option=subword_option,
                )
        return untokenize_output_string(translation.decode("utf-8"))

    def __del__(self):
        self.session.close()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.session.close()

With the help of jdehesa's comments I understood what went wrong.
When you do not specify which graph should be used, TensorFlow creates a new graph instance and adds the operations to it. That's why simply changing the InteractiveSession to a normal Session (to avoid nesting interactive sessions) throws a new error: ValueError: Operation name: "init_all_tables" op: "NoOp" is not an element of this graph.

Using an InteractiveSession worked because it sets the given graph as the default instead of creating a new instance. The problem with an InteractiveSession is that it is bad practice to leave multiple sessions open at the same time, and TensorFlow throws a warning when you do.
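
For illustration, here is a minimal sketch of that behaviour (assuming the TF 1.x API):

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    in_g = tf.constant(1)       # lives in g

outside = tf.constant(2)        # lives in the implicit default graph, not in g

sess = tf.Session(graph=g)      # runs ops from g, but does not change the default graph
sess.run(in_g)                  # fine
# sess.run(outside)             # ValueError: ... is not an element of this graph

# InteractiveSession(graph=g) pushes g onto the default-graph stack when created and
# only pops it on close, so keeping several open at once breaks the expected nesting
# and triggers the "Nesting violated" warning.
isess = tf.InteractiveSession(graph=g)
isess.close()
sess.close()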

The solution was the following: when changing the InteractiveSession to a normal Session, you need to explicitly define the graph in which you want to reload the model with model_helper.load_model. This can be done with a context manager: with self.infer_model.graph.as_default():

The eventual solution was the following:

def rebuild_infer_model(self):
    """
    Recreate the infer model after changing hparams.
    This is time-consuming.
    :return:
    """
    self.session = tf.Session(
        graph=self.infer_model.graph, config=utils.get_config_proto()
    )
    # added line:
    with self.infer_model.graph.as_default():  # the model must be loaded within the same graph that is used when inferring!
        model_helper.load_model(
            self.infer_model.model, self.ckpt, self.session, "infer"
        )

def infer_once(self, in_string):
    """
    Turn an orthographic transcription into a phonetic transcription
    The transcription is processed all at once
    Long transcriptions may result in incomplete phonetic output
    :param in_string: orthographic transcription
    :return: string of the phonetic representation
    """
    # added line:
    with self.infer_model.graph.as_default():
        in_data = tokenize_input_string(in_string)

        self.session.run(
            self.infer_model.iterator.initializer,
            feed_dict={
                self.infer_model.src_placeholder: [in_data],
                self.infer_model.batch_size_placeholder: self.hparams.infer_batch_size,
            },
        )

        subword_option = self.hparams.subword_option
        beam_width = self.hparams.beam_width
        tgt_eos = self.hparams.eos
        num_translations_per_input = self.hparams.num_translations_per_input

        num_sentences = 0

        num_translations_per_input = max(
            min(num_translations_per_input, beam_width), 1
        )

        nmt_outputs, _ = self.infer_model.model.decode(self.session)
        if beam_width == 0:
            nmt_outputs = np.expand_dims(nmt_outputs, 0)

        batch_size = nmt_outputs.shape[1]
        num_sentences += batch_size

        for sent_id in range(batch_size):
            for beam_id in range(num_translations_per_input):
                translation = nmt_utils.get_translation(
                    nmt_outputs[beam_id],
                    sent_id,
                    tgt_eos=tgt_eos,
                    subword_option=subword_option,
                )
    return untokenize_output_string(translation.decode("utf-8"))
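
With each instance doing its work inside its own graph, several models can be preloaded side by side and reused across requests. A minimal usage sketch (the language class names below are hypothetical):

# Preload one model_inference per language at service start-up; each instance
# owns its own graph and Session, so they do not interfere with each other.
MODELS = {
    name: model_inference(name)
    for name in ("Dutch", "English")  # hypothetical language class names
}

def handle_request(language_name, in_string):
    # Per request only the cheap inference step runs; no graph rebuilding.
    return MODELS[language_name].infer_once(in_string)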
