I have a gRPC server with the following proto:
syntax = "proto3";

service MyServicer {
  rpc DoSomething(stream InputBigData) returns (stream OutputBigData) {}
}

message InputBigData {
  bytes data = 1;
}

message OutputBigData {
  bytes data = 1;
}
And my server is created with the following Python code:
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10),
                     options=[('grpc.max_receive_message_length', -1),
                              ('grpc.max_send_message_length', -1)])
max_receive_message_length and max_send_message_length are set to -1 (unlimited) to allow the transfer of big messages (typically 8 MB). The client defines the same options.
Case 1: Suppose the client sends InputBigData to the server at a higher rate than the server can process. How can I configure how many InputBigData messages (or bytes) can be queued in the input stream?
Case 2: Suppose the client reads the OutputBigData responses at a lower rate than the server produces them. How can I configure how many OutputBigData messages (or bytes) can be queued in the output stream?
I know gRPC flow control is based on HTTP/2: https://httpwg.org/specs/rfc7540.html#FlowControl I tried to set grpc.http2.write_buffer_size to 67108864 (which seems to be the maximum value), but nothing happened.
Here is an implementation that highlights case 2:
# server.py
from concurrent import futures
import grpc
import myservicer_pb2_grpc, myservicer_pb2
class MyServicer(myservicer_pb2_grpc.MyServicerServicer):
    def DoSomething(self, request_iterator, context):
        big_data = b'0' * (1920 * 1080 * 4)
        for r in request_iterator:
            print("server received input big data")
            yield myservicer_pb2.OutputBigData(data=big_data)
            print("server sent output big data")
if __name__ == '__main__':
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10),
                         options=[('grpc.max_receive_message_length', -1),
                                  ('grpc.max_send_message_length', -1)])
    myservicer_pb2_grpc.add_MyServicerServicer_to_server(MyServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()
# client.py
import time
import grpc
import myservicer_pb2_grpc
import myservicer_pb2
def big_data_generator():
    big_data = b'0' * (1920 * 1080 * 4)
    for i in range(100):
        yield myservicer_pb2.InputBigData(data=big_data)

def run():
    with grpc.insecure_channel(
            'localhost:50051',
            options=[('grpc.max_send_message_length', -1),
                     ('grpc.max_receive_message_length', -1)]) as channel:
        stub = myservicer_pb2_grpc.MyServicerStub(channel)
        res = stub.DoSomething(big_data_generator())
        for r in res:
            print("Client received data")
            time.sleep(10)

if __name__ == '__main__':
    run()
After 10 seconds my server output is:
server received input big data
server sent output big data
server received input big data
server sent output big data
server received input big data
And my client output is:
Client received data
My server received 3 InputBigData and sent 2 OutputBigData. It is now blocked until the client consumes the output data. In this scenario I want to increase the output buffer size (by 2 or 3 times) so the server can keep processing input data even when the client is slow to consume the results.
Thanks for the detailed question. I tried your example, but I still couldn't tune gRPC to increase its window size freely.
gRPC channel arguments are documented here, and the flow-control implementation is here. Only a few of them can affect flow control:

grpc.http2.bdp_probe=0: disables automatic window increase.
grpc.http2.max_frame_size: the HTTP/2 maximum frame size.
grpc.http2.write_buffer_size: not really a flow-control option; it backs GRPC_WRITE_BUFFER_HINT (write without blocking), and GRPC_WRITE_BUFFER_HINT is not yet supported in gRPC Python.

There is no argument that can trigger a window-size update. The default window size is 64 KB, and gRPC grows it automatically via BDP estimation. For example, on my laptop the client-outbound window size grew to 8380679 bytes (~8 MB), but I have yet to find a way to intervene in this process manually.
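For reference, here is how those channel arguments would be passed when creating a channel; note that none of them directly sets the flow-control window size (the frame-size value below is the HTTP/2 default, shown only as an illustration):

```python
# Channel arguments that touch HTTP/2 framing / BDP probing.
# None of these directly controls the flow-control window size.
options = [
    ('grpc.http2.bdp_probe', 0),           # disable BDP-based window growth
    ('grpc.http2.max_frame_size', 16384),  # HTTP/2 max frame size (default 16 KiB)
]

# Pass them when creating the channel, e.g.:
# channel = grpc.insecure_channel('localhost:50051', options=options)
```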
So, unfortunately, you might need application-level buffering. You could use coroutines in asyncio, or threads with a thread-safe queue, on both the client side and the server side.
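A minimal sketch of the thread-plus-queue idea: a helper (hypothetical, not part of gRPC) that drains any iterator in a background thread into a bounded queue, so the producer can run ahead of a slow consumer by up to `maxsize` items. On the client you would wrap the response iterator with it; the same pattern works on the server for the request iterator.

```python
import queue
import threading

def buffered(iterator, maxsize=3):
    """Drain `iterator` in a background thread into a bounded queue.

    The producer may run ahead of the consumer by up to `maxsize`
    items; beyond that, `q.put` blocks, giving application-level
    flow control on top of gRPC's own windowing.
    """
    q = queue.Queue(maxsize=maxsize)
    _SENTINEL = object()  # marks end of the stream

    def drain():
        for item in iterator:
            q.put(item)       # blocks while the buffer is full
        q.put(_SENTINEL)

    threading.Thread(target=drain, daemon=True).start()
    while True:
        item = q.get()
        if item is _SENTINEL:
            return
        yield item

# Hypothetical usage with the example client above:
# for r in buffered(stub.DoSomething(big_data_generator()), maxsize=3):
#     print("Client received data")
```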