GRPC/C++ - How to detect client disconnected in Async Server
I am using the code from this example to create my gRPC async server:
#include <memory>
#include <iostream>
#include <string>
#include <thread>

#include <grpcpp/grpcpp.h>
#include <grpc/support/log.h>

#ifdef BAZEL_BUILD
#include "examples/protos/helloworld.grpc.pb.h"
#else
#include "helloworld.grpc.pb.h"
#endif

using grpc::Server;
using grpc::ServerAsyncResponseWriter;
using grpc::ServerBuilder;
using grpc::ServerContext;
using grpc::ServerCompletionQueue;
using grpc::Status;
using helloworld::HelloRequest;
using helloworld::HelloReply;
using helloworld::Greeter;

class ServerImpl final {
 public:
  ~ServerImpl() {
    server_->Shutdown();
    // Always shutdown the completion queue after the server.
    cq_->Shutdown();
  }

  // There is no shutdown handling in this code.
  void Run() {
    std::string server_address("0.0.0.0:50051");

    ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());

    // LINES ADDED BY ME TO IMPLEMENT KEEPALIVE
    // (The keepalive doc's snippet also calls AddListeningPort, but the
    // port is already registered above, so only the channel arguments
    // are added here.)
    builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 2000);
    builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 3000);
    builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
    // END OF LINES ADDED BY ME

    // Register "service_" as the instance through which we'll communicate with
    // clients. In this case it corresponds to an *asynchronous* service.
    builder.RegisterService(&service_);
    // Get hold of the completion queue used for the asynchronous communication
    // with the gRPC runtime.
    cq_ = builder.AddCompletionQueue();
    // Finally assemble the server.
    server_ = builder.BuildAndStart();
    std::cout << "Server listening on " << server_address << std::endl;

    // Proceed to the server's main loop.
    HandleRpcs();
  }

 private:
  // Class encompassing the state and logic needed to serve a request.
  class CallData {
   public:
    // Take in the "service" instance (in this case representing an
    // asynchronous server) and the completion queue "cq" used for
    // asynchronous communication with the gRPC runtime.
    CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
        : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
      // Invoke the serving logic right away.
      Proceed();
    }

    void Proceed() {
      if (status_ == CREATE) {
        // Make this instance progress to the PROCESS state.
        status_ = PROCESS;
        // As part of the initial CREATE state, we *request* that the system
        // start processing SayHello requests. In this request, "this" acts as
        // the tag uniquely identifying the request (so that different CallData
        // instances can serve different requests concurrently), in this case
        // the memory address of this CallData instance.
        service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                                  this);
      } else if (status_ == PROCESS) {
        // Spawn a new CallData instance to serve new clients while we process
        // the one for this CallData. The instance will deallocate itself as
        // part of its FINISH state.
        new CallData(service_, cq_);

        // The actual processing.
        std::string prefix("Hello ");
        reply_.set_message(prefix + request_.name());

        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        responder_.Finish(reply_, Status::OK, this);
      } else {
        GPR_ASSERT(status_ == FINISH);
        // Once in the FINISH state, deallocate ourselves (CallData).
        delete this;
      }
    }

   private:
    // The means of communication with the gRPC runtime for an asynchronous
    // server.
    Greeter::AsyncService* service_;
    // The producer-consumer queue for asynchronous server notifications.
    ServerCompletionQueue* cq_;
    // Context for the rpc, allowing to tweak aspects of it such as the use
    // of compression, authentication, as well as to send metadata back to
    // the client.
    ServerContext ctx_;

    // What we get from the client.
    HelloRequest request_;
    // What we send back to the client.
    HelloReply reply_;

    // The means to get back to the client.
    ServerAsyncResponseWriter<HelloReply> responder_;

    // Let's implement a tiny state machine with the following states.
    enum CallStatus { CREATE, PROCESS, FINISH };
    CallStatus status_;  // The current serving state.
  };

  // This can be run in multiple threads if needed.
  void HandleRpcs() {
    // Spawn a new CallData instance to serve new clients.
    new CallData(&service_, cq_.get());
    void* tag;  // uniquely identifies a request.
    bool ok;
    while (true) {
      // Block waiting to read the next event from the completion queue. The
      // event is uniquely identified by its tag, which in this case is the
      // memory address of a CallData instance.
      // The return value of Next should always be checked. This return value
      // tells us whether there is any kind of event or cq_ is shutting down.
      GPR_ASSERT(cq_->Next(&tag, &ok));
      GPR_ASSERT(ok);
      static_cast<CallData*>(tag)->Proceed();
    }
  }

  std::unique_ptr<ServerCompletionQueue> cq_;
  Greeter::AsyncService service_;
  std::unique_ptr<Server> server_;
};

int main(int argc, char** argv) {
  ServerImpl server;
  server.Run();
  return 0;
}
I did some research and found that I had to implement KeepAlive (https://grpc.github.io/grpc/cpp/md_doc_keepalive.html), so I added these lines:
builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 2000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 3000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
So far so good: the server works and communication flows smoothly. But how can I detect that a client has disconnected? The so-called KeepAlive mechanism I added doesn't seem to do anything for me.
Where is my mistake, and how can I detect on the async server that a client has disconnected, for whatever reason?
Let me start with some background information.

One important thing to understand about gRPC is that it uses HTTP/2, which multiplexes multiple streams over a single TCP connection. Each gRPC call is a separate stream, regardless of whether the call is unary or streaming. In general, any gRPC call can send zero or more messages in each direction; a unary call is just a special case with exactly one message from client to server followed by exactly one message from server to client.

We usually use the word "disconnected" to mean that the TCP connection dropped, rather than that an individual stream terminated, although people sometimes use it the other way around. I'm not sure which one you mean here, so I'll answer both.

The gRPC API exposes the stream lifetime to the application, but not the TCP connection lifetime. The intent is that the library handles all the details of managing TCP connections and hides them from the application -- we don't actually expose a way to tell when a connection drops, and you shouldn't need to care, because the library will automatically reconnect for you. :) The only case visible to the application is that if streams are already in flight when a TCP connection fails, those streams will fail.
As I said, the library does expose the lifetime of individual streams to the application; a stream's lifetime is basically the lifetime of the CallData object in the code above. There are two ways to find out whether a stream has terminated. One is to explicitly call ServerContext::IsCancelled(). The other is to request an event on the CQ via ServerContext::AsyncNotifyWhenDone(), which asynchronously notifies the application of the cancellation.
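To make the AsyncNotifyWhenDone() route concrete, here is a minimal, untested sketch of how it could be wired into the CallData pattern from the question, assuming the same helloworld service. The Tag base class and the names ProceedTag/DoneTag/OnDone are my own bookkeeping, not part of the gRPC API; the two hard requirements from gRPC are that AsyncNotifyWhenDone() is called before the RPC is requested, and that the object behind a tag stays alive until that tag has been drained from the queue.

```cpp
#include <iostream>
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"  // same proto as in the question

using grpc::ServerAsyncResponseWriter;
using grpc::ServerCompletionQueue;
using grpc::ServerContext;
using grpc::Status;
using helloworld::Greeter;
using helloworld::HelloReply;
using helloworld::HelloRequest;

// Every value handed to the completion queue is now a Tag*, so the
// event loop can dispatch both kinds of events for a call.
struct Tag {
  virtual ~Tag() = default;
  virtual void Run(bool ok) = 0;
};

class CallData {
 public:
  CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
      : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE),
        proceed_tag_(this), done_tag_(this) {
    // Must be registered BEFORE the RPC is requested below. The tag is
    // delivered exactly once, when the RPC ends for any reason:
    // normal completion, cancellation, or client disconnect.
    ctx_.AsyncNotifyWhenDone(&done_tag_);
    Proceed();
  }

 private:
  struct ProceedTag : Tag {
    explicit ProceedTag(CallData* c) : c(c) {}
    void Run(bool /*ok*/) override { c->Proceed(); }
    CallData* c;
  };
  struct DoneTag : Tag {
    explicit DoneTag(CallData* c) : c(c) {}
    void Run(bool /*ok*/) override { c->OnDone(); }
    CallData* c;
  };

  void OnDone() {
    done_delivered_ = true;
    // Once the done tag has been delivered, IsCancelled() is reliable.
    if (ctx_.IsCancelled()) {
      std::cout << "client disconnected or cancelled" << std::endl;
    }
    MaybeDelete();
  }

  void Proceed() {
    if (status_ == CREATE) {
      status_ = PROCESS;
      service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                                &proceed_tag_);
    } else if (status_ == PROCESS) {
      new CallData(service_, cq_);
      reply_.set_message("Hello " + request_.name());
      status_ = FINISH;
      responder_.Finish(reply_, Status::OK, &proceed_tag_);
    } else {
      finish_delivered_ = true;
      MaybeDelete();
    }
  }

  void MaybeDelete() {
    // Two tags are outstanding per call; free the object only after
    // BOTH have come out of the completion queue.
    if (done_delivered_ && finish_delivered_) delete this;
  }

  Greeter::AsyncService* service_;
  ServerCompletionQueue* cq_;
  ServerContext ctx_;
  HelloRequest request_;
  HelloReply reply_;
  ServerAsyncResponseWriter<HelloReply> responder_;
  enum CallStatus { CREATE, PROCESS, FINISH };
  CallStatus status_;
  bool done_delivered_ = false;
  bool finish_delivered_ = false;
  ProceedTag proceed_tag_;
  DoneTag done_tag_;
};
```

With this scheme, HandleRpcs() casts the queue output to Tag* instead of CallData*: static_cast<Tag*>(tag)->Run(ok). A real server would also need to handle ok == false events (delivered at shutdown or when a request is never matched), which this sketch glosses over.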
Note that, in general, a unary example like the HelloWorld above doesn't really need to worry about detecting stream cancellation, because from the server's perspective the whole stream doesn't actually last very long anyway. It's typically more useful for streaming calls. But there are exceptions, such as a unary call that has to do a lot of expensive asynchronous work before it can send its response.
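For completeness, here is a sketch of the IsCancelled() polling approach in the *synchronous* API, where it is always safe to call (in the async API it is only guaranteed to give the final answer once the AsyncNotifyWhenDone tag has been delivered). kChunks and LongComputationChunk() are placeholders for real work, not actual APIs.

```cpp
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"  // same proto as in the question

using grpc::ServerContext;
using grpc::Status;
using helloworld::Greeter;
using helloworld::HelloReply;
using helloworld::HelloRequest;

constexpr int kChunks = 100;        // placeholder work size
void LongComputationChunk(int) {}   // placeholder for real work

class GreeterServiceImpl final : public Greeter::Service {
  Status SayHello(ServerContext* ctx, const HelloRequest* req,
                  HelloReply* reply) override {
    for (int chunk = 0; chunk < kChunks; ++chunk) {
      // Between chunks of expensive work, ask whether the peer is
      // still there; if it disconnected, abandon the work early.
      if (ctx->IsCancelled()) {
        return Status(grpc::StatusCode::CANCELLED, "client went away");
      }
      LongComputationChunk(chunk);
    }
    reply->set_message("Hello " + req->name());
    return Status::OK;
  }
};
```

The design point is simply to break long work into chunks so there is a natural place to check for cancellation; how often to check is a trade-off between wasted work and overhead.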
I hope this information is helpful.