In TensorFlow 1.15, what's the difference between explicit XLA compilation and auto-clustering?

I'm trying to learn how to use XLA for my models, and I'm looking at the official doc here: https://www.tensorflow.org/xla#enable_xla_for_tensorflow_models . It documents two methods to enable XLA: 1) explicit compilation, by decorating your training function with @tf.function(jit_compile=True); 2) auto-clustering, by setting environment variables.
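
For reference, the environment-variable route the doc mentions is TF_XLA_FLAGS=--tf_xla_auto_jit=2. A minimal sketch of setting it from Python, assuming it is set before TensorFlow initializes anything:

import os

# Documented auto-clustering switch; set it before TensorFlow parses its XLA flags
os.environ["TF_XLA_FLAGS"] = "--tf_xla_auto_jit=2"

import tensorflow as tf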

Since I'm using TensorFlow 1.15, not 2.x, I think the second approach is the same as using this statement:

config.graph_options.optimizer_options.global_jit_level = (
        tf.OptimizerOptions.ON_1)
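
For context, a minimal sketch of how that option would be wired into a TF 1.15 session (the graph below is a made-up toy, not an actual model):

import tensorflow as tf  # TF 1.15

config = tf.ConfigProto()
# Turn on XLA auto-clustering for every graph executed in this session
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

x = tf.placeholder(tf.float32, [None, 10])
w = tf.Variable(tf.random_normal([10, 1]))
loss = tf.reduce_sum(tf.matmul(x, w))

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss, feed_dict={x: [[1.0] * 10]}))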

You can also find info here: https://www.tensorflow.org/xla/tutorials/autoclustering_xla . It seems this is what they use in TF 2.x:

tf.config.optimizer.set_jit(True) # Enable XLA.

I think they are the same; correct me if I'm wrong.

OK, so for the first approach, I think the TF 1.15 equivalent is

tf.xla.experimental.compile(computation)
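
For what it's worth, in 1.15 tf.xla.experimental.compile is called with the computation and its inputs rather than used as a decorator. A rough sketch, assuming an XLA-enabled build (the computation below is a toy example):

import tensorflow as tf  # TF 1.15

def computation(x, y):
    # Everything built inside this function is compiled and run as one XLA computation
    return tf.reduce_sum(tf.matmul(x, y))

x = tf.placeholder(tf.float32, [2, 3])
y = tf.placeholder(tf.float32, [3, 2])

# A single result comes back wrapped in a tuple/list of output tensors
(out,) = tf.xla.experimental.compile(computation, inputs=[x, y])

with tf.Session() as sess:
    print(sess.run(out, feed_dict={x: [[1., 2., 3.]] * 2, y: [[1., 2.]] * 3}))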

So my question is: if I have used tf.xla.experimental.compile(computation) to wrap my whole training function, is this equivalent to using

config.graph_options.optimizer_options.global_jit_level = (
        tf.OptimizerOptions.ON_1)

? Does anybody know? Much appreciated.

According to this video from the TF team (2021), clustering will automatically look for places to optimize. Nevertheless, due to its unpredictable behaviour, they recommend decorating tf.functions with @tf.function(jit_compile=True) over relying on out-of-the-box clustering.
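
As a small illustration of what they recommend in TF 2.x (the function below is just a toy; on older 2.x releases the argument was called experimental_compile):

import tensorflow as tf  # TF 2.x

@tf.function(jit_compile=True)  # explicitly compile this function with XLA
def train_step(x, w):
    return tf.reduce_sum(x * w)

print(train_step(tf.ones([4]), tf.ones([4])))  # runs the XLA-compiled function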

If you want to use auto-clustering, note that set_jit(True) is being deprecated and the recommended way now is tf.config.optimizer.set_jit('autoclustering').
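
A quick sketch of that call, assuming a recent TF 2.x release where set_jit accepts the string form:

import tensorflow as tf  # recent TF 2.x

tf.config.optimizer.set_jit("autoclustering")  # enable XLA auto-clustering globally
print(tf.config.optimizer.get_jit())           # check the current JIT setting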
