
How to handle a POST request with Kafka, Alpakka Kafka, Play Framework and WebSocket?

Let's say I have two Kafka topics: request_topic for my POST requests, and response_topic for my responses.

These are the models:

case class Request(requestId: String, body: String)
case class Response(responseId: String, body: String, requestId: String)

This is my socket handler:

import scala.concurrent.Future
import akka.Done
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.{Flow, Sink, Source}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

def socket = WebSocket.accept[String, String] { req =>
  val requestId = ??? // Generate a unique requestId

  val in: Sink[String, Future[Done]] = Sink.foreach[String] { msg =>
    val record = new ProducerRecord[String, Request]("request_topic", "key", Request(requestId, msg))
    val producer: KafkaProducer[String, Request] = ???
    Future { producer.send(record).get }
  }

  // Once produced, some stream processing app will process the request
  // and publish the response to response_topic.
  // The Request and Response objects are linked by the requestId field.

  val consumerSettings = ???
  val out: Source[String, _] = Consumer
    .plainSource(consumerSettings, Subscriptions.topics("response_topic"))
    .filter(cr => cr.value.requestId == requestId)
    .map(cr => someResponseString(cr.value))

  Flow.fromSinkAndSource(in, out)
}

def someResponseString(res: Response): String = ???
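For the requestId placeholder, one common approach (an assumption here, not something the post specifies) is to generate a random UUID per WebSocket connection, since the Kafka key and the requestId field are plain strings:

```scala
import java.util.UUID

// One unique id per WebSocket connection; collisions are practically impossible.
def newRequestId(): String = UUID.randomUUID().toString
```

Then `val requestId = newRequestId()` at the top of the handler gives each connection its own correlation id.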

Basically, for each incoming message, I publish a Request object to Kafka, then the request is processed by some stream processing app (not shown here) and hopefully a response is published back to Kafka.

I have some concerns here:

1 - Will the Alpakka Kafka connector create a new Kafka client instance for each incoming message, or will it reuse the same instance as long as Play is running?

2 - Is it a good idea to filter responses by individual requestId on the server, or should I send the whole stream back to each client and let them filter out the responses they are interested in based on requestId?

3 - Am I getting this all wrong? (I am a real newbie with WebSockets.)

Thanks in advance.

1) It depends on how you configure it. For example, in the body of the in: Sink, you're creating a new KafkaProducer for each message. Instead, you should have one producer for the entire application.
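A minimal sketch of such a shared producer, assuming String serialization and a local broker (both assumptions; adapt the serializers and bootstrap.servers to your setup):

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// One KafkaProducer for the whole application. KafkaProducer is thread-safe,
// so every WebSocket connection can share this single instance.
object SharedKafkaProducer {
  private val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // assumption: local broker
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  // Created lazily on first use, then reused for every send.
  lazy val instance: KafkaProducer[String, String] = new KafkaProducer[String, String](props)
}
```

Inside the Sink you would then call `SharedKafkaProducer.instance.send(record)` instead of constructing a producer per message.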

I'm not sure how Akka's / Play's threading model works, but most web servers start a new thread for each incoming connection, up to a fixed number of threads in a thread pool.

2) Filtering as soon as possible is preferable, as is doing as much work as possible on the server side. This saves bandwidth back to the client.

Also, if you only want to push data one way, from Kafka on the web server to a client, you likely want Server-Sent Events (SSE), not a WebSocket.
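In Play, an SSE endpoint over the same filtered stream might look like this sketch, where `responseSource(requestId)` is a hypothetical helper returning the filtered Source[String, _] that plays the role of `out` in the question:

```scala
import play.api.http.ContentTypes
import play.api.libs.EventSource
import play.api.mvc._

// One-directional push: the client subscribes (e.g. via the browser's
// EventSource API) with the requestId it cares about.
def events(requestId: String) = Action {
  Ok.chunked(responseSource(requestId) via EventSource.flow)
    .as(ContentTypes.EVENT_STREAM)
}
```

Since your clients only send one logical request and then wait for responses, a plain POST endpoint to publish the Request plus an SSE stream for the responses may be simpler than a bidirectional WebSocket.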
