
An Actor “queue”?

In Java, to write a library that makes requests to a server, I usually implement some sort of dispatcher (not unlike the one found in the Twitter4J library: http://github.com/yusuke/twitter4j/blob/master/twitter4j-core/src/main/java/twitter4j/internal/async/DispatcherImpl.java ) to limit the number of connections, to perform asynchronous tasks, etc.

The idea is that N number of threads are created. A "Task" is queued and all threads are notified, and one of the threads, when it's ready, will pop an item from the queue, do the work, and then return to a waiting state. If all the threads are busy working on a Task, then the Task is just queued, and the next available thread will take it.

This keeps the max number of connections to N, and allows at most N Tasks to be operating at the same time.
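For reference, the thread-pool-plus-queue dispatcher described above maps directly onto a `java.util.concurrent` fixed pool, which keeps an internal unbounded queue and runs at most N tasks concurrently. This is a minimal sketch of that pattern (the `FixedDispatcher` name and pool size are chosen here for illustration):

```scala
import java.util.concurrent.{Executors, TimeUnit}

// A fixed pool of N threads shares one internal task queue: at most
// N tasks run at the same time, and any extra submissions wait in
// the queue until a thread becomes free.
object FixedDispatcher {
  val N = 4
  private val pool = Executors.newFixedThreadPool(N)

  // Queue a task; a waiting thread picks it up when ready.
  def submit(task: Runnable): Unit = pool.execute(task)

  // Stop accepting tasks and wait for the queued ones to finish.
  def shutdown(): Unit = {
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
  }
}
```

Submitting more than N tasks simply queues the overflow, exactly as in the dispatcher above.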

I'm wondering what kind of system I can create with Actors that will accomplish the same thing? Is there a way to have N number of Actors, and when a new message is ready, pass it off to an Actor to handle it - and if all Actors are busy, just queue the message?

The Akka framework is designed to solve this kind of problem, and is exactly what you're looking for.

Look through the documentation - there are lots of highly configurable dispatchers (event-based, thread-based, load-balanced, work-stealing, etc.) that manage actor mailboxes and allow them to work in conjunction. You may also find this blog post interesting.

E.g. this code instantiates a new work-stealing dispatcher backed by a fixed thread pool, which performs load balancing among the actors it supervises:

  val workStealingDispatcher =
    Dispatchers.newExecutorBasedEventDrivenWorkStealingDispatcher("pooled-dispatcher")
  workStealingDispatcher
    .withNewThreadPoolWithLinkedBlockingQueueWithUnboundedCapacity
    .setCorePoolSize(16)
    .buildThreadPool

An actor that uses the dispatcher:

class MyActor extends Actor {

  messageDispatcher = workStealingDispatcher

  def receive = {
    case _ =>
  }
}

Now, if you start 2+ instances of the actor, the dispatcher will balance the load between the actors' mailboxes (queues): an actor that has too many messages in its mailbox will "donate" some to actors that have nothing to do.

Well, you have to look at the actors scheduler, as actors are not usually 1-to-1 with threads. The idea behind actors is that you may have many of them, but the actual number of threads will be limited to something reasonable. They are not supposed to be long-running either, but rather to answer quickly to the messages they receive. In short, the architecture of that code seems to be wholly at odds with how one would design an actor system.

Still, each working actor may send a message to a Queue actor asking for the next task, and then loop back to react. This Queue actor would receive either enqueue messages or dequeue messages. It could be designed like this:

import scala.actors.Actor._
import scala.collection.mutable.Queue

case class Enqueue(d: AnyRef)
case class Dequeue(a: scala.actors.Actor)

// Inside the Queue actor's act method:
val q: Queue[AnyRef] = new Queue[AnyRef]
loop {
  react {
    case Enqueue(d) => q enqueue d
    case Dequeue(a) if q.nonEmpty => a ! q.dequeue
  }
}
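Since the `scala.actors` library used above has long been deprecated, the same pull-based design can be sketched in plain Scala with a blocking queue: each worker "asks for the next task" simply by taking from a shared queue, and blocks until one is available. The `PullWorkers` name and the poison-pill shutdown are assumptions for this sketch, not part of the answer above:

```scala
import java.util.concurrent.LinkedBlockingQueue

// N workers pull Some(task) items from a shared queue; None is a
// poison pill telling a worker to stop. This mirrors the Queue
// actor: tasks wait in the queue until a worker asks for one.
object PullWorkers {
  private val queue = new LinkedBlockingQueue[Option[Runnable]]()

  def enqueue(task: Runnable): Unit = queue.put(Some(task))

  // Start n worker threads, each looping until it sees a poison pill.
  def start(n: Int): Seq[Thread] = {
    (1 to n).map { _ =>
      val t = new Thread(() => {
        Iterator.continually(queue.take())
          .takeWhile(_.isDefined)
          .foreach(_.get.run())
      })
      t.start()
      t
    }
  }

  // One poison pill per worker, then wait for all of them to finish.
  def shutdown(workers: Seq[Thread]): Unit = {
    workers.foreach(_ => queue.put(None))
    workers.foreach(_.join())
  }
}
```

With this shape there is no need for a `Dequeue` guard: a worker that finds the queue empty simply blocks in `take()` instead of being re-notified later.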
