
How do I make a blocking-aware execution context?

For some reason I can't wrap my head around implementing this. I've got an application running on Play that calls out to Elasticsearch. As part of my design, my service uses the Java API wrapped with Scala futures, as shown in this blog post. I've updated the code from that post to hint to the ExecutionContext that it will be doing some blocking I/O, like so:

    import scala.concurrent.{blocking, Future, Promise}
    import org.elasticsearch.action.{ActionRequestBuilder, ActionListener, ActionResponse}

    // `this` is the enclosing ActionListener[T]; `promise` is the Promise[T]
    // it completes in onResponse/onFailure (see the linked post)
    def execute[RB <: ActionRequestBuilder[_, T, _, _]](request: RB): Future[T] = {
        blocking {
            request.execute(this)
            promise.future
        }
    }

My actual service, which constructs the queries to send to ES, takes an ExecutionContext as a constructor parameter that it then uses for calls to Elasticsearch. I did this so that the global execution context Play uses won't have its threads tied down by the blocking calls to ES. This SO comment mentions that only the global context is blocking-aware, so that leaves me having to create my own. In that same post/answer there's a lot of information about using a ForkJoin pool, but I'm not sure how to take what's written in those docs and combine it with the hints in the blocking documentation to create an execution context that responds to blocking hints.

I think one of the issues I have is that I'm not sure exactly how to respond to the blocking context in the first place. I was reading the best practices, and the example it uses is an unbounded cache of threads:

Note that here I prefer to use an unbounded "cached thread-pool", so it doesn't have a limit. When doing blocking I/O the idea is that you've got to have enough threads that you can block. But if unbounded is too much, depending on use-case, you can later fine-tune it, the idea with this sample being that you get the ball rolling.

So does this mean that with my ForkJoin-backed thread pool, I should try to use a cached thread when dealing with non-blocking I/O and create a new thread for blocking I/O? Or something else? Pretty much every resource I find online about using separate thread pools tends to do what the Neophytes guide does, which is to say:

How to tune your various thread pools is highly dependent on your individual application and beyond the scope of this article.

I know it depends on your application, but in this case I just want to create some type of blocking-aware ExecutionContext and understand a decent strategy for managing the threads. If the context is specifically for a single part of the application, should I just make a fixed-size thread pool and not use (or ignore) the blocking keyword in the first place?

I tend to ramble, so I'll try to break down what I'm looking for in an answer:

  1. Code! Reading all these docs still leaves me feeling just out of reach of being able to code a blocking-aware context, and I'd really appreciate an example.
  2. Any links or tips on how to handle blocking threads, i.e. endlessly make a new thread for each one, check the number of threads available and reject if there are too many, or some other strategy.
  3. I'm not looking for performance tips here; I know I'll only get those with testing, but I can't test if I can't figure out how to code the contexts in the first place! I did find an example of ForkJoins vs thread pools, but I'm missing the crucial part about blocking there.
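To make concrete what "responding to the hint" even means: as far as I can tell, scala.concurrent.blocking simply delegates to the BlockContext of the current thread. Here is a toy sketch of that contract (RecordingContext is a name I made up; a real pool would do more than record the call):

```scala
import scala.concurrent.{blocking, BlockContext, CanAwait}

// Toy BlockContext: a real pool would use blockOn to compensate for the
// blocked thread (e.g. by activating a spare one); this one just records
// that the hint was delivered and then runs the thunk.
class RecordingContext extends BlockContext {
  @volatile var sawBlockingHint = false
  override def blockOn[T](thunk: => T)(implicit permission: CanAwait): T = {
    sawBlockingHint = true
    thunk
  }
}

object BlockingHintDemo {
  def main(args: Array[String]): Unit = {
    val recorder = new RecordingContext
    // Install the context for the duration of this body; any
    // `blocking { ... }` inside it is routed to recorder.blockOn.
    val result = BlockContext.withBlockContext(recorder) {
      blocking { 21 * 2 }
    }
    println(result)                   // 42
    println(recorder.sawBlockingHint) // true
  }
}
```

So "responding to blocking" means implementing blockOn on the threads (or via withBlockContext) that run the tasks.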

Sorry for the long question here; I'm just trying to give you a sense of what I'm looking at. I've been trying to wrap my head around this for over a day and need some outside help.


Edit: Just to make this clear, the ElasticSearchService's constructor signature is:

//Note that these are not implicit parameters!
class ElasticSearchService(otherParams ..., val executionContext: ExecutionContext)

And in my application startup code I have something like this:

object Global extends GlobalSettings {
    val elasticSearchContext = //Custom Context goes here
    ...
    val elasticSearchService = new ElasticSearchService(params, elasticSearchContext);
    ...
}
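For the simple "dedicated fixed pool, skip the blocking keyword" option from above, the context I pass in could be as plain as this (a sketch: EsContexts and the pool size of 8 are my own placeholder choices, not anything Play prescribes):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, ExecutionContextExecutorService}

// A dedicated, fixed-size pool for the ES client. Since every task sent
// here is assumed to block, the pool size directly caps how many threads
// can be blocked at once; no blocking-hint machinery is involved.
object EsContexts {
  val elasticSearchContext: ExecutionContextExecutorService =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(8))
}
```

Then the startup code would just pass EsContexts.elasticSearchContext as the constructor argument.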

I am also reading through Play's recommendations for contexts, but have yet to see anything about blocking hints, and I suspect I might have to look into the source to see whether they extend the BlockContext trait.

So I dug into the documentation, and Play's best practice for the situation I'm dealing with is:

In certain circumstances, you may wish to dispatch work to other thread pools. This may include CPU heavy work, or IO work, such as database access. To do this, you should first create a thread pool, this can be done easily in Scala:

And it provides some code:

object Contexts {
    implicit val myExecutionContext: ExecutionContext = Akka.system.dispatchers.lookup("my-context")
}

The context is from Akka, so I dug in there searching for the defaults and the types of contexts they offer, which eventually led me to the documentation on dispatchers. The default is a ForkJoinPool, whose standard mechanism for managing a block is to call managedBlock(blocker). That led me to the documentation, which states:

Blocks in accord with the given blocker. If the current thread is a ForkJoinWorkerThread, this method possibly arranges for a spare thread to be activated if necessary to ensure sufficient parallelism while the current thread is blocked.

So it seems like if I have a ForkJoinWorkerThread, then the behavior I think I want will take place. Looking at the source of ForkJoinPool some more, I noted that the default thread factory is:

val defaultForkJoinWorkerThreadFactory: ForkJoinWorkerThreadFactory = juc.ForkJoinPool.defaultForkJoinWorkerThreadFactory

This implies to me that if I use the defaults in Akka, I'll get a context which handles blocking in the way I expect.
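That compensation behavior is observable with the standard library's global context, which is backed by exactly this kind of blocking-aware ForkJoinPool. In the sketch below (my own demonstration of the mechanism, not Akka-specific code), every task waits on one latch until all tasks have started; on a machine with fewer cores than tasks this would deadlock without the hint, but with blocking the pool activates spare threads and everything completes:

```scala
import java.util.concurrent.CountDownLatch
import scala.concurrent.{blocking, Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ManagedBlockingDemo {
  // Runs n tasks that each block until all n have started, then returns
  // how many completed. The blocking hint lets the global ForkJoinPool
  // activate spare threads, so this finishes even when n > core count.
  def runAll(n: Int): Int = {
    implicit val ec: ExecutionContext = ExecutionContext.global
    val latch = new CountDownLatch(n)
    val tasks = (1 to n).map { _ =>
      Future {
        blocking {
          latch.countDown()
          latch.await() // released only once every task is running
          1
        }
      }
    }
    Await.result(Future.sequence(tasks), 30.seconds).sum
  }

  def main(args: Array[String]): Unit =
    println(runAll(32)) // 32
}
```

Dropping the blocking wrapper around latch.await() is enough to hang this on most machines, which is a decent smoke test for whether a context really honors the hint.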

So, reading the Akka documentation again, it would seem that specifying my context something like this:

my-context {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 8
    parallelism-factor = 3.0
    parallelism-max = 64
    task-peeking-mode = "FIFO"
  }
  throughput = 100
}

would be what I want.

While I was searching the source code, I looked for uses of blocking and calls to managedBlock, and found an example of overriding the ForkJoin behavior in ThreadPoolBuilder:

private[akka] class AkkaForkJoinWorkerThread(_pool: ForkJoinPool)
  extends ForkJoinWorkerThread(_pool) with BlockContext {
  override def blockOn[T](thunk: ⇒ T)(implicit permission: CanAwait): T = {
    val result = new AtomicReference[Option[T]](None)
    ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker {
      def block(): Boolean = {
        result.set(Some(thunk))
        true
      }
      def isReleasable = result.get.isDefined
    })
    result.get.get // Exception intended if None
  }
}

This seems like what I originally asked for: an example of how to make something that implements BlockContext. That file also has code showing how to make an ExecutorServiceFactory, which I believe is what the executor part of the configuration references. So I think, if I wanted a totally custom context, I would extend some type of WorkerThread and write my own ExecutorServiceFactory that uses the custom worker thread, and then specify the fully qualified class name in the property, as this post advises.
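Spelled out with just the standard library (skipping Akka's ExecutorServiceFactory plumbing), that plan looks something like the following. The names BlockingAwareThread and BlockingAwareContext are my own placeholders, and the parallelism of 8 is arbitrary:

```scala
import java.util.concurrent.{ForkJoinPool, ForkJoinWorkerThread}
import scala.concurrent.{BlockContext, CanAwait, ExecutionContext, ExecutionContextExecutorService}

// Worker threads that mix in BlockContext, so scala.concurrent.blocking
// hints reach ForkJoinPool.managedBlock (which may activate spare threads
// while this one is blocked).
class BlockingAwareThread(pool: ForkJoinPool)
    extends ForkJoinWorkerThread(pool) with BlockContext {
  override def blockOn[T](thunk: => T)(implicit permission: CanAwait): T = {
    var result: Option[T] = None
    ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker {
      def block(): Boolean = { result = Some(thunk); true }
      def isReleasable: Boolean = result.isDefined
    })
    result.get
  }
}

object BlockingAwareContext {
  private val factory = new ForkJoinPool.ForkJoinWorkerThreadFactory {
    def newThread(pool: ForkJoinPool): ForkJoinWorkerThread =
      new BlockingAwareThread(pool)
  }
  // parallelism 8, no UncaughtExceptionHandler, asyncMode = true (FIFO)
  private val pool = new ForkJoinPool(8, factory, null, true)

  val elasticSearchContext: ExecutionContextExecutorService =
    ExecutionContext.fromExecutorService(pool)
}
```

This is untested against real ES traffic, but it is the shape of thing I could pass into the ElasticSearchService constructor above.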

I'm probably going to go with using Akka's ForkJoin :)
