
Can You Mesh Network Google Coral TPUs?

I believe it is possible to leverage the power of two USB-connected Google Coral TPUs in conjunction with one another (or at least side by side, with each running its own inferences).

However, is it possible to leverage two or more Google Coral TPUs that are connected to separate edge devices but sit on the same network, linked by hard-wired Ethernet and a switch?

Hmm, there is no official API for doing this, so I'll give you a more general answer.

  • You may want to look into Kubernetes? I have not tried it, but it seems to support aarch64, which should work perfectly on the Dev Board.

  • Create servers and communicate via HTTP? I actually have an open-source project called restor; unfortunately, it hasn't been maintained. But you may also check out doods.
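The HTTP approach above can be sketched with the standard library alone: a small round-robin dispatcher that spreads inference requests across the networked edge devices. This is a minimal sketch under stated assumptions, i.e. each edge device runs some small HTTP server exposing a hypothetical `/infer` endpoint that wraps its locally attached Coral TPU; the worker addresses and the endpoint path are made up for illustration.

```python
import itertools
import json
import urllib.request

# Hypothetical endpoints: one small HTTP server per edge device,
# each wrapping its locally attached Coral TPU.
WORKERS = ["http://192.168.1.10:8000", "http://192.168.1.11:8000"]

# Cycle through the workers so requests are spread evenly.
_worker_cycle = itertools.cycle(WORKERS)

def next_worker():
    """Return the next worker URL in round-robin order."""
    return next(_worker_cycle)

def classify(image_bytes):
    """POST raw image bytes to the next worker's /infer endpoint
    and return its JSON response (e.g. labels and scores)."""
    req = urllib.request.Request(
        next_worker() + "/infer",
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

The dispatcher itself is stateless apart from the cycle, so you could swap round-robin for least-loaded or latency-aware selection without touching the workers.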

Possibilities are endless :)

You may run multiple models on a single TPU, or run multiple TPUs on a single device. Look into this: https://coral.ai/docs/edgetpu/multiple-edgetpu/

If you run multiple models on one TPU, the TPU will likely keep switching between models to reload them. You can combine the models using the compiler tool to avoid this. If you use multiple Edge TPU devices on a single host, you can tell each interpreter instance which device it should use.
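The multiple-TPUs-on-one-host case can be sketched with PyCoral, whose `make_interpreter` accepts a `device` argument (`':0'` for the first enumerated Edge TPU, `':1'` for the second, and so on). The model filenames below are placeholders, and the PyCoral import is done lazily so the sketch only needs the library on a host with TPUs actually attached.

```python
def device_string(index):
    """Map a TPU index to the device spec PyCoral expects (':0', ':1', ...)."""
    return f":{index}"

def load_on_tpu(model_path, tpu_index):
    """Create an interpreter pinned to a specific Edge TPU.

    Imported lazily so this sketch only requires PyCoral when it is
    actually run on a host with Edge TPUs attached.
    """
    from pycoral.utils.edgetpu import make_interpreter

    interpreter = make_interpreter(model_path, device=device_string(tpu_index))
    interpreter.allocate_tensors()
    return interpreter

# Placeholder model paths; each interpreter gets its own TPU:
# interp_a = load_on_tpu("model_a_edgetpu.tflite", 0)
# interp_b = load_on_tpu("model_b_edgetpu.tflite", 1)
```

For the multiple-models-on-one-TPU case, the linked page describes co-compiling the models together with the Edge TPU compiler so they can share the TPU's cache instead of repeatedly evicting each other.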

All the information is in the link above.
