Debugging batching in TensorFlow Serving (no effect observed)

I have a small web server that receives sentences as input and needs to return a model prediction using TensorFlow Serving. It works fine on our single GPU, but now I'd like to enable batching so that TensorFlow Serving waits briefly to group incoming sentences before processing them together in one batch on the GPU.

I'm using the predesigned server framework with the predesigned batching framework, on the initial release of TensorFlow Serving. I enable batching with the --batching flag and have set batch_timeout_micros = 10000 and max_batch_size = 1000. The logging confirms that batching is enabled and that the GPU is being used.
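(For reference, in current releases of tensorflow_model_server the equivalent configuration is expressed with --enable_batching and a batching parameters file; the sketch below uses that newer form. The flag names in the initial release differ, and the model name and paths are placeholders.)

    # batching_parameters.txt -- text-format batching parameters
    # (num_batch_threads and max_enqueued_batches are arbitrary example values)
    max_batch_size { value: 1000 }
    batch_timeout_micros { value: 10000 }
    num_batch_threads { value: 4 }
    max_enqueued_batches { value: 100 }

    # launch command:
    tensorflow_model_server \
        --port=8500 \
        --model_name=my_model \
        --model_base_path=/models/my_model \
        --enable_batching=true \
        --batching_parameters_file=/models/batching_parameters.txt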

However, when sending requests to the serving server, the batching has minimal effect: the total time scales almost linearly with the number of requests, so sending 50 requests at the same time takes close to ten times as long as sending 5. Interestingly, the predict() function of the server is run once for each request (see here), which suggests to me that the batching is not being handled properly.
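For reference, the timing test is roughly equivalent to the client sketch below, written against the current tensorflow_serving.apis Python package; the model name "my_model" and the input tensor name "sentences" are placeholders for the real ones:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    def one_request(sentence):
        # Build a PredictRequest; model and tensor names are placeholders.
        request = predict_pb2.PredictRequest()
        request.model_spec.name = "my_model"
        request.model_spec.signature_name = "serving_default"
        request.inputs["sentences"].CopyFrom(
            tf.make_tensor_proto([sentence], dtype=tf.string))
        return stub.Predict(request, timeout=10.0)

    def time_concurrent_requests(n):
        # Fire n requests at once; if server-side batching works, total wall
        # time should grow much more slowly than linearly in n.
        start = time.time()
        with ThreadPoolExecutor(max_workers=n) as pool:
            list(pool.map(one_request, ["an example sentence"] * n))
        return time.time() - start

    for n in (5, 50):
        print(f"{n:3d} concurrent requests took {time_concurrent_requests(n):.3f} s")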

Am I missing something? How do I check what's wrong with the batching?


Note that this is different from How to do batching in Tensorflow Serving? , as that question only examines how to send multiple requests from a single client, not how to enable TensorFlow Serving's behind-the-scenes batching of multiple separate requests.

(I am not familiar with the server framework, but I'm quite familiar with HPC and with cuBLAS and cuDNN, the libraries TF uses for its dot products and convolutions on the GPU.)

There are several issues that could cause disappointing performance scaling with the batch size.

I/O overhead, by which I mean network transfers, disk access (for large data), serialization, deserialization and similar cruft. These things tend to be linear in the size of the data.

To look into this overhead, I suggest you deploy two models: the one you actually need, and a trivial one that uses the same I/O, then subtract the time taken by one from the other (a sketch of the trivial model follows below).

This time difference should be close to the time the complex model takes when you run it directly, without the I/O overhead.
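A minimal sketch of such a trivial model, assuming a string-in/string-out signature and the current SavedModel export API (the export path on the release you are using would look different); the tensor names and export directory are only examples:

    import tensorflow as tf

    class TrivialModel(tf.Module):
        # Accepts the same kind of input as the real model (a batch of sentences)
        # but does essentially no work, so serving it measures only the I/O path:
        # network transfer, (de)serialization, request handling.
        @tf.function(input_signature=[tf.TensorSpec([None], tf.string, name="sentences")])
        def serve(self, sentences):
            return {"sentences_out": tf.identity(sentences)}

    model = TrivialModel()
    tf.saved_model.save(
        model,
        "/tmp/trivial_model/1",                       # example export directory
        signatures={"serving_default": model.serve},
    )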

If the bottleneck is in the I/O, speeding up the GPU work is inconsequential.

Note that even if increasing the batch size makes the GPU faster, it might make the whole thing slower, because the GPU now has to wait for the I/O of the whole batch to finish to even start working.

cuDNN scaling: things like matmul need large batch sizes to achieve their optimal throughput, but convolutions using cuDNN might not (at least that hasn't been my experience, though this may depend on the version and the GPU architecture).

Models limited by RAM, GPU RAM, or PCIe bandwidth: if your model's bottleneck is in any of these, it probably won't benefit from bigger batch sizes.

The way to check this is to run your model directly (perhaps with mock input), compare the timing to the aforementioned time difference and plot it as a function of the batch size.
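Here is a rough timing sketch along those lines, assuming a SavedModel with a "serving_default" signature whose input is named "sentences" (both names and the path are placeholders for whatever your model actually uses):

    import time
    import tensorflow as tf

    model = tf.saved_model.load("/path/to/your/savedmodel")   # placeholder path
    infer = model.signatures["serving_default"]

    for batch_size in (1, 2, 4, 8, 16, 32, 64, 128):
        batch = tf.constant(["an example sentence"] * batch_size)   # mock input
        infer(sentences=batch)                  # warm-up run (tracing, GPU init)
        start = time.time()
        runs = 20
        for _ in range(runs):
            infer(sentences=batch)
        per_batch = (time.time() - start) / runs
        print(f"batch_size={batch_size:4d}  "
              f"{per_batch * 1000:8.2f} ms/batch  "
              f"{per_batch / batch_size * 1000:7.2f} ms/example")

If the ms/example figure barely improves as the batch grows, the model itself won't gain much from server-side batching, no matter how the serving layer is configured.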


By the way, as per the performance guide, one thing you could try is using the NCHW layout, if you are not already. There are other tips there.
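If your model contains convolutions, switching layouts is mostly a matter of the data_format argument; a small illustration with made-up shapes (note that channels_first convolutions generally require a GPU):

    import tensorflow as tf

    # NHWC ("channels_last") is TensorFlow's default layout; NCHW ("channels_first")
    # is usually the faster layout for cuDNN convolutions on NVIDIA GPUs.
    conv_nhwc = tf.keras.layers.Conv2D(64, 3, data_format="channels_last")
    conv_nchw = tf.keras.layers.Conv2D(64, 3, data_format="channels_first")

    x_nhwc = tf.random.normal([32, 224, 224, 3])   # batch, height, width, channels
    x_nchw = tf.transpose(x_nhwc, [0, 3, 1, 2])    # batch, channels, height, width

    y_nhwc = conv_nhwc(x_nhwc)
    y_nchw = conv_nchw(x_nchw)   # NCHW convolutions are typically only supported on GPU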
