
What are the hardware requirements for the use of pre-trained models in applications?

I know this is most likely a complicated question, which is probably answered by "it depends".

But is it a good idea to use pre-trained models in client-side applications that might lack powerful hardware? Or should models only ever run on stronger systems that provide the needed services to the weaker clients via an API?

If the model is already trained and doesn't need additional training, i.e. it is only going to make predictions, computational resources are unlikely to be a constraint. But, of course, it depends...
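To make that concrete, here is a minimal sketch of the inference-only case, assuming PyTorch and one of torchvision's pre-trained classifiers (the specific model is just a placeholder): the client loads fixed weights once and only ever runs forward passes.

```python
import torch
from torchvision import models

# Load a pre-trained classifier once at startup; no training happens on the client.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()                         # inference mode: dropout/batch-norm stay fixed
preprocess = weights.transforms()    # the preprocessing the weights were trained with

def predict(image):
    """One forward pass; with gradients disabled, memory and compute stay modest."""
    x = preprocess(image).unsqueeze(0)    # add batch dimension
    with torch.no_grad():                 # no autograd bookkeeping needed for inference
        logits = model(x)
    return logits.argmax(dim=1).item()    # predicted class index
```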

If the model was pre-trained but needs additional training, e.g. transfer learning in a neural network where every layer but the last one was pre-trained, or any model that has to be constantly updated with new data, then client-side resources can of course be a constraint. That said, I would say the issue here is not the availability of resources but the necessity: if you don't need to train the model on the client side, don't do it. C'mon, who wants an application that hangs your phone or drains the battery in minutes?
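A rough sketch of that transfer-learning scenario, again assuming PyTorch and a torchvision backbone (the number of classes and the learning rate are made up for illustration): every layer is frozen except a freshly replaced last layer, so only that layer's parameters are handed to the optimizer.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a pre-trained backbone and freeze all of its layers.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh one for the new task.
num_classes = 10                                   # placeholder for the target task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer's parameters are trainable, which keeps the cost low.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch_x, batch_y):
    """One gradient step on the last layer only; the frozen backbone just runs forward."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch_x), batch_y)
    loss.backward()          # gradients flow only into model.fc
    optimizer.step()
    return loss.item()
```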

But what if the model needs to be trained on sensitive data that cannot leave the client side? Or maybe it needs to be used offline? In those cases, the client-side hardware becomes a forced constraint, not an option.
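For the offline case, one common pattern is to do the heavy training wherever it is allowed to happen and then ship a self-contained, serialized model to the device. A sketch using TorchScript (the file name and example input are placeholders):

```python
import torch
from torchvision import models

# Serialize the trained model into a single file that can run on the client
# without the original Python model definition.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

example_input = torch.rand(1, 3, 224, 224)         # dummy input used for tracing
scripted = torch.jit.trace(model, example_input)    # record the forward pass
scripted.save("classifier_offline.pt")              # bundle weights + graph

# On the client, the file is loaded directly and used for offline inference.
loaded = torch.jit.load("classifier_offline.pt")
with torch.no_grad():
    prediction = loaded(example_input).argmax(dim=1)
```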

So, I would say it depends ... but on the deployment scenario, not on the model itself or the hardware.
