TensorFlow Serving - more names for the same model

I want two different URLs to resolve to the same model, so that I don't need to create another base_path or version. If I write my config file as below, does it cache my model twice?

models.config:

model_config_list {
  config {
    name: 'name1'
    base_path: '/models/model/'
    model_platform: "tensorflow"
  }
  config {
    name: 'name2'
    base_path: '/models/model/'
    model_platform: "tensorflow"
  }
}
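
For reference, with this config both names would be reachable through TensorFlow Serving's REST predict endpoint. The sketch below is only an illustration: it assumes the default REST port 8501, a locally running server, and a made-up input payload; only the two model names come from the config above.

import requests

# Hypothetical input; replace with whatever your SavedModel's signature expects.
payload = {"instances": [[1.0, 2.0, 3.0]]}

for name in ("name1", "name2"):
    # TF Serving's REST predict endpoint: /v1/models/<name>:predict
    url = f"http://localhost:8501/v1/models/{name}:predict"
    print(name, requests.post(url, json=payload).json())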

So I ran this project to monitor my Docker containers and started serving one model. Then I added the same model path and version as described in the question, but under a different name. The allocated memory for serving was nearly double, and with three config entries it was nearly three times as much.
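
Since each config entry appears to load its own copy of the servable, one way to avoid the extra memory (a client-side sketch, not a feature of TensorFlow Serving itself) is to keep a single config entry and map the extra public names to it before building the request. The alias names and the served model name 'model' below are assumptions for illustration.

import requests

# Single servable registered in TF Serving under the name 'model' (assumed);
# the public names are resolved to it before the request is built.
ALIASES = {"name1": "model", "name2": "model"}

def predict(public_name, instances):
    served_name = ALIASES.get(public_name, public_name)
    url = f"http://localhost:8501/v1/models/{served_name}:predict"
    return requests.post(url, json={"instances": instances}).json()

print(predict("name1", [[1.0, 2.0, 3.0]]))
print(predict("name2", [[1.0, 2.0, 3.0]]))  # same servable, no second copy in memory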
