Akka Cluster aware routers - share redis instance to all routees

In the context of an Akka cluster application, I ran into an issue with a property that Akka expects: every (case) class and every message used must be serializable. My context is the following: I want to consume data from a Redis cluster and, for that, I decided to adopt the cluster-aware router pool so that adding nodes gives me more workers. The workers read data from Redis and store some metadata in MongoDB. In a first version, I did this:

object MasterWorkers {

  def props
  (  awsBucket : String,
     gapMinValueMicroSec : Long,
     persistentCache: RedisCache,
     mongoURI : String,
     mongoDBName : String,
     mongoCollectioName : String
  ) : Props =
    Props(MasterWorkers(awsBucket, gapMinValueMicroSec, persistentCache, mongoURI, mongoDBName, mongoCollectioName))

  case class JobRemove(deviceId: DeviceId, from : Timestamp, to : Timestamp)
}

case class MasterWorkers
(
  awsBucket : String,
  gapMinValueMicroSec : Long,
  persistentCache: RedisCache,
  mongoURI : String,
  mongoDBName : String,
  mongoCollectioName : String
) extends Actor with ActorLogging {

  val workerRouter =
    context.actorOf(FromConfig.props(Props(classOf[Worker],awsBucket,gapMinValueMicroSec, self, persistentCache, mongoURI, mongoDBName, mongoCollectioName)),
    name = "workerRouter")

Worker class:

object Worker {

  def props
  (
    awsBucket : String,
    gapMinValueMicroSec : Long,
    replyTo : ActorRef,
    persistentCache: RedisCache,
    mongoURI : String,
    mongoDBName : String,
    mongoCollectioName : String
  ) : Props =
    Props(Worker(awsBucket, gapMinValueMicroSec, replyTo, persistentCache, mongoURI, mongoDBName, mongoCollectioName))

  case class JobDumpFailed(deviceId : DeviceId, from: Timestamp, to: Timestamp)
  case class JobDumpSuccess(deviceId : DeviceId, from: Timestamp, to: Timestamp)

  case class JobRemoveFailed(deviceId : DeviceId, from: Timestamp, to: Timestamp)
}

case class Worker
(
  awsBucket : String,
  gapMinValueMicroSec : Long,
  replyTo : ActorRef,
  persistentCache: RedisCache,
  mongoURI : String,
  mongoDBName : String,
  mongoCollectioName : String
) extends Actor with ActorLogging {

But this raises the exception below when I start two nodes:

[info] akka.remote.MessageSerializer$SerializationException: Failed to serialize remote message [class akka.remote.DaemonMsgCreate] using serializer [class akka.remote.serialization.DaemonMsgCreateSerializer].
[info] at akka.remote.MessageSerializer$.serialize(MessageSerializer.scala:61)
[info] at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:895)
[info] at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:895)
[info] at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
[info] at akka.remote.EndpointWriter.serializeMessage(Endpoint.scala:894)
[info] at akka.remote.EndpointWriter.writeSend(Endpoint.scala:786)
[info] at akka.remote.EndpointWriter$$anonfun$4.applyOrElse(Endpoint.scala:761)
[info] at akka.actor.Actor$class.aroundReceive(Actor.scala:497)
[info] at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:452)
[info] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
[info] at akka.actor.ActorCell.invoke(ActorCell.scala:495)
[info] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
[info] at akka.dispatch.Mailbox.run(Mailbox.scala:224)
[info] at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
[info] at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
[info] at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
[info] at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
[info] at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[info] Caused by: java.io.NotSerializableException: akka.actor.ActorSystemImpl

The Redis cache is a simple case class with a companion object, implementing an interface like this:

object RedisCache { // some static functions }

case class RedisCache
(
  master : RedisServer,
  slaves : Seq[RedisServer]
)(implicit actorSystem : ActorSystem)
  extends PersistentCache[DeviceKey, BCPPackets] with LazyLogging {
// some code here
}

To solve the issue, I then moved the redisCache into the worker, and I'm no longer passing it from the master node:

case class Worker
(
  awsBucket : String,
  gapMinValueMicroSec : Long,
  replyTo : ActorRef,
  mongoURI : String,
  mongoDBName : String,
  mongoCollectioName : String
) extends Actor with ActorLogging {

// redis cache here now 
val redisCache = ...

But with such a design, every routee creates a new instance of the Redis cache, which is not the expected behaviour. What I want is to have one instance of my Redis cache and share it with all my routees, but in the context of a cluster application this seems to be impossible, so I don't know whether it's a design failure or some missing experience with Akka on my part. If anyone has met similar issues, I'll gladly take advice!

The problem is that your RedisCache is not that simple: it carries around an ActorSystem, which cannot be serialized.

I guess this is because it contains RedisClient instances from, e.g., the rediscala library, and these require an ActorSystem.
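
(As a side note, this kind of failure can be surfaced in local tests, before deploying a cluster, by asking Akka to serialize creators and messages even for local deployments; these are standard Akka settings:)

akka {
  actor {
    serialize-creators = on   # verify that Props are serializable
    serialize-messages = on   # verify that messages are serializable
  }
}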

You will need to abstract away from the actor system, and pass to your workers only the bare details of the Redis cluster (i.e. the RedisServer objects).

The workers will then instantiate the RedisClient themselves, using their own context.system.

case class Worker
(
  awsBucket : String,
  gapMinValueMicroSec : Long,
  replyTo : ActorRef,
  redisMaster: RedisServer,
  redisSlaves: Seq[RedisServer],
  mongoURI : String,
  mongoDBName : String,
  mongoCollectioName : String
) extends Actor with ActorLogging {

  val masterSlaveClient = ??? //create from the RedisServer details

}
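
A minimal sketch of that instantiation, assuming the rediscala library's RedisClientMasterSlaves (verify against your client's actual API):

import akka.actor.ActorSystem
import redis.{RedisClientMasterSlaves, RedisServer}

// inside the Worker's body: build the client from the plain RedisServer
// details, using this worker's own actor system
implicit val system: ActorSystem = context.system
val masterSlaveClient = RedisClientMasterSlaves(redisMaster, redisSlaves)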

This will allow each worker to set up its own connection to the Redis cluster.

Alternatively, if you want to connect only once in your master and share that connection with your workers, you need to pass around the RedisClientActor (source here) that embeds your connection. This is an ActorRef and can be shared remotely.

This ActorRef can be obtained by calling client.redisConnection.
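
For example, in the master (a sketch assuming a rediscala RedisClient; host and port are placeholders):

import akka.actor.{ActorRef, ActorSystem}
import redis.RedisClient

implicit val system: ActorSystem = context.system
// create the client once, then extract the shareable connection actor
val client = RedisClient("redis-host", 6379)  // placeholder host/port
val redisConnection: ActorRef = client.redisConnection
// hand redisConnection to each worker's Props instead of the whole RedisCache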

The workers can then build an ActorRequest around it, for example:

case class Worker
(
  awsBucket : String,
  gapMinValueMicroSec : Long,
  replyTo : ActorRef,
  redisConnection: ActorRef,
  mongoURI : String,
  mongoDBName : String,
  mongoCollectioName : String
) extends Actor with ActorLogging with ActorRequest {

  // you will need to implement the execution context that ActorRequest needs as well

  send(redisCommand)

}
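
To make that sketch compile, rediscala's ActorRequest also needs an implicit ExecutionContext (its redisConnection member is already satisfied by the constructor parameter above). A hedged sketch of the missing pieces, where Get[String] and the key "some-key" are illustrative assumptions:

import scala.concurrent.{ExecutionContext, Future}
import redis.api.strings.Get

// satisfy ActorRequest's abstract execution context with the actor's own dispatcher
implicit val executionContext: ExecutionContext = context.dispatcher

// example: issue a GET through the shared connection actor
val value: Future[Option[String]] = send(Get[String]("some-key"))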
