Logging requests being served by tensorflow serving model
I have built a model using TensorFlow Serving and ran it on a server using this command:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA
But now this screen is stagnant, giving no information about incoming requests and responses. I tried the TF_CPP_MIN_VLOG_LEVEL=1 flag, but it produces a huge amount of output and still no logging/monitoring of incoming requests/responses. Please suggest how to view those logs.
The second problem I am facing is how to run this process in the background and monitor it constantly. Suppose I close the console: the process should keep running, and I should be able to reconnect to its console again and see real-time traffic. Any suggestions will be helpful.
When you run the command below, you start a tensorflow_model_server process that serves the model on a port (9009 here).
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA
You are not displaying the logs here; you are only running the model server. That is why the screen is stagnant. You need to pass the flag -v=1 when you run the command to display the logs on your console:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=1 --port=9009 --model_name='model_name' --model_base_path=model_path
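If the verbose output floods the console, you can redirect it to a file and filter it afterwards. A minimal sketch of the pattern, using a short stand-in script in place of the real server invocation above (the log text here is illustrative):

```shell
# Redirect stdout and stderr of a verbose command to a file; the
# `sh -c '...'` script stands in for the tensorflow_model_server run.
sh -c 'echo "startup noise" >&2; echo "Processing HTTP request" >&2' > verbose.log 2>&1

# Pull out only the lines you care about:
grep 'Processing HTTP request' verbose.log   # prints: Processing HTTP request
```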
Now, on to logging/monitoring of incoming requests/responses. You cannot monitor the incoming requests/responses when the VLOG level is set to 1. VLOGs are verbose logs. You need to use log level 3 to display all errors, warnings, and some informational messages related to processing times (INFO1 and STAT1). Please see this link for further details on VLOGs: http://webhelp.esri.com/arcims/9.2/general/topics/log_verbose.htm
Now for your second problem. I would suggest using the environment variable provided by TensorFlow Serving, export TF_CPP_MIN_VLOG_LEVEL=3, instead of setting flags. Set the environment variable before you start the server. After that, enter the command below to start your server and store the logs in a logfile named my_log:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name='model_name' --model_base_path=model_path &> my_log &
Even after you close your console, all the logs keep being stored while the model server runs. Hope this helps.
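To keep the process alive after the console closes, and to reattach to the live log from a new console later, the usual shell pattern is nohup plus tail -f. A sketch, with a short sleep-based script standing in for the tensorflow_model_server command above (its two echoed lines are illustrative):

```shell
# Start the process detached from the terminal; all output goes to my_log.
# Replace the sh -c '...' stand-in with the real server invocation.
nohup sh -c 'echo "server started"; sleep 1; echo "request served"' > my_log 2>&1 < /dev/null &
echo $! > server.pid        # remember the PID so you can stop it later

# From any new console, follow the log in real time:
#   tail -f my_log
# Stop the server when done:
#   kill "$(cat server.pid)"

wait "$(cat server.pid)"    # only for this demo, so the log file is complete
cat my_log
```

With a real long-running server you would not call wait; you would simply log out and later run tail -f my_log from a fresh session.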
For rudimentary HTTP request logging, you can set TF_CPP_VMODULE=http_server=1 to set the VLOG level just for the module http_server.cc. That will get you a very bare request log showing incoming requests and some basic error cases:
2020-08-26 10:42:47.225542: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 761 bytes.
2020-08-26 10:44:32.472497: I tensorflow_serving/model_servers/http_server.cc:139] Ignoring HTTP request: GET /someboguspath
2020-08-26 10:51:36.540963: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: GET /v1/someboguspath body: 0 bytes.
2020-08-26 10:51:36.541012: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: GET /v1/someboguspath Error: Invalid argument: Malformed request: GET /v1/someboguspath
2020-08-26 10:53:17.039291: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: GET /v1/models/someboguspath body: 0 bytes.
2020-08-26 10:53:17.039456: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: GET /v1/models/someboguspath Error: Not found: Could not find any versions of model someboguspath
2020-08-26 11:01:43.466636: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 755 bytes.
2020-08-26 11:01:43.473195: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: POST /v1/models/mymodel:predict Error: Invalid argument: Incompatible shapes: [1,38,768] vs. [1,40,768]
[[{{node model/transformer/embeddings/add}}]]
2020-08-26 11:02:56.435942: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 754 bytes.
2020-08-26 11:02:56.436762: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: POST /v1/models/mymodel:predict Error: Invalid argument: JSON Parse error: Missing a comma or ']' after an array element. at offset: 61
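TF_CPP_VMODULE is an ordinary environment variable, so it can simply be exported before launching the server. A sketch; the commented-out invocation, its model name, and the --rest_api_port=8501 value are illustrative assumptions, not taken from the question:

```shell
# Enable verbose logging for the http_server.cc module only:
export TF_CPP_VMODULE=http_server=1

# Confirm it is set; the server process will inherit it:
env | grep '^TF_CPP_VMODULE='   # prints: TF_CPP_VMODULE=http_server=1

# Then launch the server as usual, e.g.:
# bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
#   --rest_api_port=8501 --model_name=mymodel --model_base_path=/models/mymodel
```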
You can skim https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/http_server.cc for occurrences of VLOG(1) << to see all logging statements in this module.
For gRPC there is probably a corresponding module that you can similarly enable VLOG for; I haven't gone looking for it.