Speed Up Multiple Model Inference on Edge TPU
I have retrained a ResNet50 model for re-identification and compiled it for the Edge TPU. However, there seems to be no way to feed a batch of images to the Edge TPU in a single invocation.

My workaround is to run multiple copies of the same model, one per image. Is there any way to speed up inference across these multiple model instances? Running them with threading is currently even slower than single-model inference.
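Roughly what I am doing now, as a sketch (the model path, input shape, and dummy inputs are placeholders; it assumes the standard tflite_runtime Edge TPU delegate on Linux):

```python
import threading
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = 'resnet50_reid_edgetpu.tflite'  # placeholder path to the compiled model

def make_interpreter():
    # Each copy of the model gets its own interpreter with the Edge TPU delegate
    # (libedgetpu.so.1 is the Linux delegate name).
    interpreter = Interpreter(
        model_path=MODEL,
        experimental_delegates=[load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()
    return interpreter

def infer(interpreter, image):
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp['index'], image)
    interpreter.invoke()
    return interpreter.get_tensor(out['index'])

# One interpreter and one thread per image.
interpreters = [make_interpreter() for _ in range(4)]
images = [np.zeros((1, 224, 224, 3), dtype=np.uint8) for _ in range(4)]  # dummy inputs

threads = [threading.Thread(target=infer, args=(it, img))
           for it, img in zip(interpreters, images)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```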
Yeah, the Edge TPU's architecture won't allow batched processing. Have you tried model pipelining? https://coral.ai/docs/edgetpu/pipeline/

Unfortunately it's only available in C++ right now, but we're looking to extend it to Python in mid Q4.
Since batch inference is not available right now, pipelining is one fallback option. However, after experimenting with my model, another option is to make a pseudo batch by feeding the Edge TPU multiple single inputs, one at a time, as sketched below.
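A minimal sketch of the pseudo-batch idea, assuming a standard tflite_runtime setup (the model path, input shape, and dummy batch are placeholders): one interpreter is reused, and the "batch" is just a Python loop over single-image invocations.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Placeholder path to the Edge-TPU-compiled model.
interpreter = Interpreter(
    model_path='resnet50_reid_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def pseudo_batch(images):
    # Feed the images one by one; the Edge TPU only ever sees single-image tensors.
    results = []
    for image in images:
        interpreter.set_tensor(inp['index'], image)
        interpreter.invoke()
        results.append(interpreter.get_tensor(out['index']).copy())
    return np.concatenate(results, axis=0)

batch = [np.zeros((1, 224, 224, 3), dtype=np.uint8) for _ in range(8)]  # dummy inputs
embeddings = pseudo_batch(batch)
```

This avoids the overhead of keeping several model copies resident, but the invocations are still sequential, so it does not give true batch-level parallelism.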