
TensorBoard debugger and gRPC max message size between client and server

I am trying to debug a TensorFlow model with the new TensorBoard debugger GUI. The communication between the client (TensorFlow side) and the server (TensorBoard side) fails with the following message:

grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.RESOURCE_EXHAUSTED, Received message larger than max (11373336 vs. 4194304))>

Apparently the issue is well known in general, and there are tricks to modify the max message size in gRPC. However, in TensorFlow this is transparent to the user, given that I am using the tf_debug.TensorBoardDebugWrapperSession wrapper.
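For reference, the usual gRPC-level trick is to raise the channel's message-size limits when the channel is created. A minimal sketch of that general approach is below (the option names are standard gRPC Python channel options; the address is hypothetical). It does not directly help here, because TensorBoardDebugWrapperSession creates its gRPC channel internally:

    import grpc

    # General gRPC workaround (not specific to TensorFlow): lift the default
    # 4 MB limits on an explicitly created client channel.
    MAX_MESSAGE_BYTES = 32 * 1024 * 1024  # 32 MB, chosen arbitrarily
    channel = grpc.insecure_channel(
        "localhost:6064",  # hypothetical server address
        options=[
            ("grpc.max_send_message_length", MAX_MESSAGE_BYTES),
            ("grpc.max_receive_message_length", MAX_MESSAGE_BYTES),
        ],
    )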

My question is how to increase the max message size so I can debug my model. I am using TensorFlow 1.6 with Python 3.6.

Thank you!

Can you try creating TensorBoardDebugWrapperSession or TensorBoardDebugHook with the keyword argument send_source=False as a workaround? The root cause is that large source files and/or a large number of source files cause the gRPC message to exceed the 4 MB message size limit.
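A minimal sketch of that workaround, assuming the keyword is named send_source as stated above (the exact parameter name may differ between TensorFlow releases, so check the signatures of TensorBoardDebugWrapperSession and TensorBoardDebugHook in your installation):

    import tensorflow as tf
    from tensorflow.python import debug as tf_debug

    # Session API: wrap the session and skip uploading source files over gRPC.
    sess = tf.Session()
    sess = tf_debug.TensorBoardDebugWrapperSession(
        sess,
        "localhost:6064",     # address used by `tensorboard --debugger_port 6064`
        send_source=False)    # workaround for the 4 MB gRPC message limit

    # Estimator/Hook API: the same idea, passed via hooks=[...] to train/evaluate.
    hook = tf_debug.TensorBoardDebugHook("localhost:6064", send_source=False)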

The issue will be fixed in the next release of TensorFlow and TensorBoard.
