
Bidirectional Streaming using Tensorflow Serving

I have a model that accepts an arbitrary-length stream of data and performs classification. I use Tensorflow Serving to listen for gRPC requests and run the classification against a trained model.

Google Cloud Speech API has a "Streaming Speech Recognition" feature, available when using gRPC requests, which "allows you to stream audio to the Cloud Speech API and receive a stream [of] speech recognition results in real time as the audio is processed".

I believe this is possible thanks to the bidirectional streaming RPC described in the gRPC documentation, whereby "the server and client could 'ping-pong': the server gets a request, then sends back a response, then the client sends another request based on the response, and so on".
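That "ping-pong" pattern can be sketched without any gRPC machinery, using plain Python generators to interleave a request stream with a response stream. Everything here is illustrative (the names and the toy confidence model are my own, not part of gRPC or TF Serving); in a real bidirectional RPC, the same shape lets the client react to each partial result before deciding to send more audio:

```python
def streaming_classify(requests):
    """Server side: a generator that consumes a request stream and yields
    a partial classification after every chunk, mimicking the shape of a
    bidirectional-streaming RPC handler. The confidence model is a toy."""
    confidence = 0.0
    for chunk in requests:
        confidence = min(confidence + len(chunk) / 100, 1.0)
        yield {"label": "speech", "confidence": confidence}

def run_session(chunks, threshold=0.9):
    """Client side: send chunks one at a time, see each response before
    the next request goes out, and close the stream early once the
    server reports enough confidence."""
    results = []

    def request_stream():
        for chunk in chunks:
            yield chunk

    for resp in streaming_classify(request_stream()):
        results.append(resp)
        if resp["confidence"] >= threshold:
            break  # client decides, based on the response, to stop sending
    return results
```

Because generators are lazy, each request is only pulled from the client after the previous response has been produced, which is exactly the interleaving the gRPC docs describe.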

So now I'm wondering whether I can achieve something similar to Google Cloud Speech API's streaming recognition using Tensorflow Serving. The only reference to this I could find in the official TF Serving docs (unless I missed something) is in the description of possible future improvements: "Servables can be of any type and interface, enabling flexibility and future improvements such as: streaming results [...]".

Is it already possible to achieve this functionality (bidirectional streaming) using TF Serving? If so, how? If not, what would be the best way to go about extending TF Serving to add this feature?

It appears that this is in fact currently unavailable. I filed a feature request on the TensorFlow Serving GitHub repository, and it was declined with the following response:

TF-Serving does not support this use-case out of the box at this time, but as you allude to, it would be possible to leverage its extensibility to do so without doing major surgery on the internals. We do not currently have any short-term plans to add it to the official distribution.

They were also unwilling to give any guidance:

Unfortunately this falls outside the scope of what we can support at this time, so you'll have to devise a solution and keep it in your own repo.
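One way to "keep it in your own repo", under the assumption that per-chunk classification is acceptable, is a thin streaming front end that forwards each request in a bidirectional stream to an ordinary unary predict call. The sketch below stubs out the model; `unary_predict` stands in for a real call to TF Serving (e.g. a `PredictionService.Predict` gRPC stub), and all names are hypothetical:

```python
def make_streaming_handler(unary_predict):
    """Wrap a unary predict function as a bidirectional-streaming handler:
    for every request pulled from the stream, one response is yielded,
    preserving order. `unary_predict` would be a TF Serving call in a
    real deployment; here it is injected so the sketch is self-contained."""
    def handler(request_stream):
        for request in request_stream:
            yield unary_predict(request)
    return handler

# Usage with a fake model in place of a TF Serving stub:
fake_predict = lambda x: {"input": x, "label": "speech" if len(x) > 2 else "noise"}
handler = make_streaming_handler(fake_predict)
```

The obvious limitation is that each chunk is classified independently; carrying state across chunks (as the Speech API does) would require the wrapper to keep per-stream session state, which is exactly the part TF Serving does not provide out of the box.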
