
Migrating from Java concurrency to Scala concurrency

I have a fairly standard mechanism in Java for solving the problem:

  • Work items must be scheduled to execute at a particular time
  • Each work item must then wait on a condition becoming true
  • Work items should be cancellable

The solution I use is as follows:

  1. Have a single-threaded scheduler to schedule my work items
  2. Have an ExecutorService (which may be multi-threaded)
  3. Each scheduled work item then submits the actual work to the ExecutorService. The returned Future is cached in a map. A completion service is used to remove the future from the cache when the work is completed
  4. Items can be cancelled via the cached futures

Of course, my executor needs to be at least as big as the number of blocking work items I expect to have, but this is not a problem in practice.
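For context, here is a minimal Scala sketch of that java.util.concurrent arrangement. The work-item ids, the pool size of 16, and the eviction of finished futures via try/finally (rather than a completion service) are illustrative assumptions, not the original code:

```scala
import java.util.concurrent._

// Single-threaded scheduler for timing; a larger pool for the actual,
// potentially blocking, work.
val scheduler = Executors.newSingleThreadScheduledExecutor()
val workers   = Executors.newFixedThreadPool(16) // >= expected number of blocked items

// Cache of in-flight work, keyed by a caller-supplied id, so items can be cancelled.
val inFlight = new ConcurrentHashMap[Long, Future[_]]()

def scheduleWork(id: Long, delayMs: Long)(work: => Unit): Unit =
  scheduler.schedule(new Runnable {
    def run() {
      val task = new FutureTask[Unit](new Callable[Unit] {
        def call() = try work finally inFlight.remove(id) // evict when done
      })
      inFlight.put(id, task) // cache before execution starts
      workers.execute(task)
    }
  }, delayMs, TimeUnit.MILLISECONDS)

// Cancels work that has already been handed to the worker pool.
def cancel(id: Long): Unit = {
  val f = inFlight.remove(id)
  if (f != null) f.cancel(true) // interrupt it if it is already running
}
```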

So now I'm coding in Scala and using the actor framework. Assuming that my work item can be encapsulated in an event sent to an actor:

  1. What mechanism would I use to schedule a work item for a specific time?
  2. If a work item is an event sent to an actor, how can I ensure that the backing thread pool is bigger than the number of items that can be blocking at the same time?
  3. How can I cause a previously scheduled work item to be cancelled?

What mechanism would I use to schedule a work item for a specific time?

I would use a java.util.concurrent.ScheduledExecutorService.
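For instance, a small hypothetical helper could turn the "particular time" into the relative delay the scheduler expects:

```scala
import java.util.Date
import java.util.concurrent.{Executors, ScheduledFuture, TimeUnit}

val scheduler = Executors.newSingleThreadScheduledExecutor()

// Schedule a task for an absolute wall-clock time by converting it to a delay.
def scheduleAt(when: Date)(body: => Unit): ScheduledFuture[_] = {
  val delayMs = math.max(0L, when.getTime - System.currentTimeMillis)
  scheduler.schedule(new Runnable { def run() { body } }, delayMs, TimeUnit.MILLISECONDS)
}
```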

If a work item is an event sent to an actor, how can I ensure that the backing thread pool is bigger than the number of items that can be blocking at the same time?

This strikes me as a design that defeats the effort of parallelisation. Try to minimise or eliminate blocking and global state. These are barriers to composability and scalability. For example, consider having a single dedicated thread that waits for files to arrive and then fires events off to actors. Or look at java.nio for asynchronous non-blocking I/O.

I don't fully understand your requirements here, but it seems like you could have a single thread/actor looking for I/O events. Then, as your scheduled "work items", schedule effects that create non-blocking actors. Have those actors register themselves with the I/O thread/actor to receive messages about the I/O events they care about.
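One possible shape of this, sketched with the old scala.actors library; the Register and FileArrived messages, the watcher, and the file path are hypothetical names used only for illustration, and something (for example a dedicated polling thread) would still have to send FileArrived to the watcher:

```scala
import scala.actors.Actor
import scala.actors.Actor._

case class Register(path: String, who: Actor) // a worker declares interest in a path
case class FileArrived(path: String)          // sent by whatever polls the file system

// The single place that deals with I/O events; workers never block on I/O themselves.
val watcher = actor {
  var interested = Map.empty[String, List[Actor]]
  loop {
    react {
      case Register(path, who) =>
        interested += (path -> (who :: interested.getOrElse(path, Nil)))
      case FileArrived(path) =>
        interested.getOrElse(path, Nil).foreach(_ ! FileArrived(path))
        interested -= path
    }
  }
}

// A scheduled "work item": a non-blocking actor that registers itself and reacts later.
val worker = actor {
  watcher ! Register("/tmp/input.dat", self)
  react {
    case FileArrived(path) => println("processing " + path)
  }
}
```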

How can I cause a previously scheduled work item to be cancelled?

ScheduledExecutorService returns Futures. What you have is not a bad design in that regard. Collect them in a Map and call future.cancel().

You could have a scheduling actor that has a list of scheduled actors, and uses Actor.receiveWithin() to wake up every second or so and send messages to actors that are ready to be executed. The scheduling actor could also handle cancelling. Another option is to let every actor handle its own scheduling directly with receiveWithin(), instead of centralizing scheduling.
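A rough sketch of such a scheduling actor, again using the old scala.actors API; the Schedule and Cancel messages and the one-second tick are assumptions made for illustration:

```scala
import scala.actors.Actor
import scala.actors.Actor._
import scala.actors.TIMEOUT

case class Schedule(id: Long, runAt: Long, target: Actor, msg: Any)
case class Cancel(id: Long)

// Wakes up roughly once a second, fires any work items whose time has come,
// and drops items that have been cancelled in the meantime.
val schedulingActor = actor {
  var pending = Map.empty[Long, Schedule]
  while (true) {
    receiveWithin(1000) {
      case s: Schedule => pending += (s.id -> s)
      case Cancel(id)  => pending -= id
      case TIMEOUT =>
        val now = System.currentTimeMillis
        val (due, rest) = pending.partition(_._2.runAt <= now)
        due.values.foreach(s => s.target ! s.msg)
        pending = rest
    }
  }
}
```

Work items would then be scheduled with schedulingActor ! Schedule(id, time, target, msg) and cancelled with schedulingActor ! Cancel(id).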

There is some discussion of this issue in the blog post Simple cron like scheduler in Scala.
