
gRPC C++ async completion queue events

I am trying to understand the gRPC C++ async model flow. This article ( link ) already explains many of my doubts. Here is the code for grpc_async_server. To understand when the CompletionQueue is getting requests, I added a few print statements as follows:

First, inside the HandleRpcs() function:

void HandleRpcs() {
    // Spawn a new CallData instance to serve new clients.
    new CallData(&service_, cq_.get());
    void* tag;  // uniquely identifies a request.
    bool ok;
    int i = 0;
    while (true) {
      std::cout << "i = " << i << std::endl; ///////////////////////////////
      // Block waiting to read the next event from the completion queue. The
      // event is uniquely identified by its tag, which in this case is the
      // memory address of a CallData instance.
      // The return value of Next should always be checked. This return value
      // tells us whether there is any kind of event or cq_ is shutting down.
      GPR_ASSERT(cq_->Next(&tag, &ok));
      GPR_ASSERT(ok);
      static_cast<CallData*>(tag)->Proceed();
      i++;
    }
  }

and inside the Proceed() function:

void Proceed() {
  if (status_ == CREATE) {
    // Make this instance progress to the PROCESS state.
    status_ = PROCESS;

    // As part of the initial CREATE state, we *request* that the system
    // start processing SayHello requests. In this request, "this" acts are
    // the tag uniquely identifying the request (so that different CallData
    // instances can serve different requests concurrently), in this case
    // the memory address of this CallData instance.
    std::cout<<"RequestSayHello called"<<std::endl; ////////////////////////////
    service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                              this);
  } else if (status_ == PROCESS) {
    // Spawn a new CallData instance to serve new clients while we process
    // the one for this CallData. The instance will deallocate itself as
    // part of its FINISH state.
    new CallData(service_, cq_);

    // The actual processing.
    std::string prefix("Hello ");
    reply_.set_message(prefix + request_.name());

    // And we are done! Let the gRPC runtime know we've finished, using the
    // memory address of this instance as the uniquely identifying tag for
    // the event.
    status_ = FINISH;
    responder_.Finish(reply_, Status::OK, this);
  } else {
    std::cout<<"deallocated"<<std::endl; ////////////////////////////
    GPR_ASSERT(status_ == FINISH);
    // Once in the FINISH state, deallocate ourselves (CallData).
    delete this;
  }
}

Once I run the server and one client ( client ), the server prints the following:

RequestSayHello called
i = 0
RequestSayHello called
i = 1
deallocated
i = 2

The second RequestSayHello called makes sense because of the creation of a new CallData instance. My question is: how does the Proceed() function get executed a second time, so that deallocated gets printed?

The completion queue ( cq_ ) handles several kinds of events, including both request and response completions. The first call to Proceed() driven by the completion queue (i.e. the one made from HandleRpcs() after cq_->Next() returns) enters the PROCESS stage of the state machine for the CallData object.

During this stage:
1. A new CallData object is created; as you noted, its constructor calls RequestSayHello, which registers that new object's address as the tag for the next incoming request
2. responder_.Finish() is called with the reply object; this registers the current object's address ( this ) as the tag for the completion event that fires once the response has been sent

When cq_->Next() returns that completion event, Proceed() is called again on the first CallData object, which is now in the FINISH state, so cleanup is performed and deallocated is printed.
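For reference, here is a minimal sketch of the CallData class that the Proceed() above belongs to, paraphrased from the linked greeter_async_server.cc example (member names and includes follow that example, but treat this as an approximation rather than a verbatim copy). The comments mark the two places where this is handed to gRPC as a tag, which is why the same object's address comes out of cq_->Next() twice:

#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"  // generated from helloworld.proto in the example

using grpc::ServerAsyncResponseWriter;
using grpc::ServerCompletionQueue;
using grpc::ServerContext;
using helloworld::Greeter;
using helloworld::HelloReply;
using helloworld::HelloRequest;

class CallData {
 public:
  CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
      : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
    // The constructor calls Proceed() directly, which is why the first
    // "RequestSayHello called" is printed before any event has been dequeued.
    Proceed();
  }

  void Proceed();  // The state machine shown in the question.

 private:
  Greeter::AsyncService* service_;
  ServerCompletionQueue* cq_;
  ServerContext ctx_;

  HelloRequest request_;
  HelloReply reply_;

  // RequestSayHello(..., this) and responder_.Finish(..., this) both register
  // `this` as the tag: the first completes when a client request arrives
  // (PROCESS), the second completes once the reply has been sent (FINISH).
  ServerAsyncResponseWriter<HelloReply> responder_;

  enum CallStatus { CREATE, PROCESS, FINISH };
  CallStatus status_;
};

So, for a single client call, cq_->Next() returns the first CallData's address twice: once when the request arrives (Proceed() runs the PROCESS branch, prints the second RequestSayHello called, and calls Finish), and once when the reply has gone out (Proceed() runs the FINISH branch and prints deallocated). The i = 0 / i = 1 / i = 2 lines are printed just before each call to Next().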
