
Ways to speed up model warm-up for the WebGL backend?

I have a model and a strict warm-up time requirement of 2 s. Runtime speed is OK for us now.

I have already tried setting

tf.env().set('WEBGL_EXP_CONV', true)
tf.env().set('WEBGL_USE_SHAPES_UNIFORMS', true) 

I turned on parallel shader compilation, but warm-up still takes about 4 s. Compiling individual shaders for every tensor shape speeds up runtime but slows down warm-up, since it increases the number of shaders that must be compiled.
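For illustration, a minimal sketch of this kind of setup and of how the warm-up time is measured (the model URL, input shape, and loadGraphModel usage are placeholders, not taken from my actual code):

import * as tf from '@tensorflow/tfjs';

// Placeholder model URL and input shape; substitute your own.
const MODEL_URL = 'https://example.com/model/model.json';
const INPUT_SHAPE = [1, 224, 224, 3];

async function measureWarmup(): Promise<void> {
  // Flags must be set before the WebGL backend compiles its first kernels.
  tf.env().set('WEBGL_EXP_CONV', true);
  tf.env().set('WEBGL_USE_SHAPES_UNIFORMS', true);
  await tf.setBackend('webgl');
  await tf.ready();

  const model = await tf.loadGraphModel(MODEL_URL);
  const input = tf.zeros(INPUT_SHAPE);

  const t0 = performance.now();
  const out = model.execute(input);
  // Reading the data back forces shader compilation and GPU execution to finish.
  await (Array.isArray(out) ? out[0] : out).data();
  console.log(`warm-up: ${(performance.now() - t0).toFixed(0)} ms`);

  tf.dispose([input, out]);
}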

Is there a full list of currently available improvements I could go through to see what I have missed?

My environment is Windows with an integrated Intel GPU (macOS is also among the target platforms).

Do a compile-only pass to warm up the WebGL engine:

  • set tf.env().set('ENGINE_COMPILE_ONLY', true),
  • run one model.execute with a tensor created with tf.zeros,
  • and at the end set the flag back to false.

It's an experimental feature and only works for models that do not explicitly require async execution. A sketch of the pass is shown below.
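A minimal sketch of that compile-only pass, assuming a tf.GraphModel and a placeholder input shape (the compileOnlyWarmup helper name is mine, not part of tfjs):

import * as tf from '@tensorflow/tfjs';

// Compile-only warm-up: the first execute() only compiles the WebGL shaders
// instead of fully running the model.
async function compileOnlyWarmup(model: tf.GraphModel, inputShape: number[]): Promise<void> {
  const dummy = tf.zeros(inputShape);

  // 1. Compile-only pass: shaders are compiled, results are not computed.
  tf.env().set('ENGINE_COMPILE_ONLY', true);
  const compileOut = model.execute(dummy);

  // 2. Switch back to normal execution for all later execute/predict calls.
  tf.env().set('ENGINE_COMPILE_ONLY', false);

  // Outputs of the compile-only pass carry no usable data; just release them.
  tf.dispose([dummy, compileOut]);
}

After this runs, a normal model.execute on real input should find its shaders already compiled. Recent tfjs releases also expose helpers on the WebGL backend for awaiting parallel shader compilation; the exact API has changed between versions, so check the tfjs-backend-webgl release notes for the version you use.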
