Removing 'initializer' layers in Tensorflow

I am trying to save my trained model to a file so I can ship it to my inference server and benchmark it. The problem is that my model (built with Keras) still contains initializer layers (such as conv1/weights/RandomUniform), which cannot be run on the device I am benchmarking on.

So, how do I go about removing all of these layers from my graph?

I've tried using tfgraph.finalize(), convert_variables_to_constants, and remove_training_nodes, but none of them removes these nodes.

The exact layer it breaks on is: 'res_net50/conv1/kernel/Initializer/random_uniform/RandomUniform'

The strip_unused_nodes transformation (part of TensorFlow's Graph Transform Tool) lets you specify the graph's input and output nodes; the tool then removes every branch of the graph that does not feed into the output nodes. Since the initializer ops only feed the variable-assignment ops, not the inference outputs, they fall away.
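Conceptually, this pruning is just a reachability walk backwards from the output nodes. Here is a minimal sketch in plain Python (with a toy graph and hypothetical node names, not TensorFlow's actual GraphDef machinery) showing why the initializer branch gets dropped:

```python
def strip_unused_nodes(graph, output_names):
    """Keep only nodes reachable from the outputs by following inputs.

    `graph` maps each node name to the list of its input node names,
    mimicking (very loosely) how a GraphDef wires nodes together.
    """
    keep = set()
    stack = list(output_names)
    while stack:
        name = stack.pop()
        if name in keep:
            continue
        keep.add(name)
        stack.extend(graph.get(name, []))
    return {name: inputs for name, inputs in graph.items() if name in keep}

# Toy graph: the initializer feeds the variable's Assign op, which the
# inference output never depends on, so both are pruned.
graph = {
    "conv1/kernel/Initializer/random_uniform/RandomUniform": [],
    "conv1/kernel/Assign": ["conv1/kernel/Initializer/random_uniform/RandomUniform"],
    "conv1/kernel": [],
    "input": [],
    "conv1/Conv2D": ["input", "conv1/kernel"],
    "output": ["conv1/Conv2D"],
}
pruned = strip_unused_nodes(graph, ["output"])
# The RandomUniform initializer and the Assign op are gone;
# input, conv1/kernel, conv1/Conv2D, and output remain.
```

In practice you would run the real transform on a frozen GraphDef, passing your actual input and output node names to the Graph Transform Tool rather than implementing this yourself.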

Having said that, my gut feeling is that removing the initialization nodes will not significantly affect inference time, because those operations are never evaluated. They do, however, consume some memory if they are not pruned.
