
Java CompletableFuture for sequential code

My new team is writing a Java gRPC service, and to ensure that we never block the request thread we ended up wrapping more or less ALL methods inside a CompletableFuture, even though those endpoints are conceptually a sequential list of operations (no parallelism).

So the code looks something like this (a full Java example is available at the end if needed):

  methodA()
    methodB()
      methodD() (let's say this one is a 15ms RPC call)
      methodE()
    methodC()
      methodF() (let's say this one is a 5ms CPU-intensive task)
      methodG()
 

Context:

  • In practice the application is much bigger and there are many more layers of functions
  • Each application host needs to handle 1000 QPS, so you can imagine that methodA is called at that rate
  • A few functions make an RPC call that can take 5-30ms (IO)
  • A very few functions run CPU-intensive work (< 5ms)

Edit 1: After more reading online yesterday, I understand that this pattern can reduce the total number of threads required if, and only if, we are using truly non-blocking HTTP and DB clients (and it doesn't look like JDBC is non-blocking). My understanding is that if we have enough memory to keep one thread per request, synchronous code would still probably be the most efficient implementation (it avoids the overhead of switching threads and reloading their data), but if we didn't have enough memory to keep that many threads alive, then making the whole code non-blocking can reduce the number of threads and thus allow the application to scale to more requests.

Question 1: I understand this unblocks the "request thread", but in practice what's the advantage? Are we truly saving CPU time? In the example below, it feels like "some" thread will be alive the whole time anyway (mostly the thread from CompletableFuture.supplyAsync in methodD); it just happens that it's not the same thread as the one that received the initial request.

Question 2: Is this pattern truly a "best practice" that all services should follow? Besides making the code a bit harder to read, per request 50+ methods get invoked, and 50+ times we call some mix of CompletableFuture .thenCompose() or .supplyAsync. It seems like that would add some overhead. Was CompletableFuture designed to be used this way across the whole code base, in every method?

Annex (Java example):

  public void myEndpoint(MyRequest request, StreamObserver<MyResponse> responseObserver) {
    methodA(10)
        .thenAccept(responseObserver::onNext);
    
  }

  public CompletableFuture<Integer> methodA(Integer input) {
    return CompletableFuture.completedFuture(input)
        .thenCompose(this::methodB)
        .thenCompose(this::methodC)
        .thenApply((i) -> {
          System.out.println("MethodA executed by ".concat(Thread.currentThread().getName() + ": " + i));
          return i;
        });
  }

  public CompletableFuture<Integer> methodB(Integer input) {
    return CompletableFuture.completedFuture(input)
        .thenCompose(this::methodD)
        .thenCompose(this::methodE)
        .thenApply((i) -> {
          System.out.println("MethodB executed by ".concat(Thread.currentThread().getName() + ": " + i));
          return i;
        });
  }

  public CompletableFuture<Integer> methodC(Integer input) {
    return CompletableFuture.completedFuture(input)
        .thenCompose(this::methodF)
        .thenCompose(this::methodG)
        .thenApply((i) -> {
          System.out.println("MethodC executed by ".concat(Thread.currentThread().getName() + ": " + i));
          return i;
        });
  }

  public CompletableFuture<Integer> methodD(Integer input) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        // Assume it's a RPC call that takes 5-30ms
        Thread.sleep(20);
        System.out.println("MethodD executed by ".concat(Thread.currentThread().getName() + ": " + input));
      } catch (InterruptedException e) {
        throw new RuntimeException(e);
      }
      return input + 1;
    });
  }

  public CompletableFuture<Integer> methodE(Integer input) {
    return CompletableFuture.supplyAsync(() -> {
      System.out.println("MethodE executed by ".concat(Thread.currentThread().getName() + ": " + input));
      return input + 1;
    });
  }

  public CompletableFuture<Integer> methodF(Integer input) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        // Let's assume it's a CPU intensive work that takes 2-5ms
        Thread.sleep(5);
        System.out.println("MethodF executed by ".concat(Thread.currentThread().getName() + ": " + input));
      } catch (InterruptedException e) {
        throw new RuntimeException(e);
      }
      return input + 1;
    });
  }

  public CompletableFuture<Integer> methodG(Integer input) {
    return CompletableFuture.supplyAsync(() -> {
      System.out.println("MethodG executed by ".concat(Thread.currentThread().getName() + ": " + input));
      return input + 1;
    });
  }

The premise is that threads are a scarce resource, which is not intrinsic to threads but a consequence of using a pool of threads with a configured maximum. The reason today's frameworks use a pool is that threads, as implemented today, are expensive, and creating too many of them can cause significant performance problems.

You wrote

My understanding is that if we have enough memory to keep one thread per request, using synchronous code would still probably be the most efficient implementation…

which is going in the right direction, but it's important to keep in mind that there may be more constraints than memory. Some operating systems' schedulers become significantly less efficient with a large number of threads, and some may even have a fixed limit on how many threads a process is allowed to create.

So, when you block a thread by making it wait for another, you are limiting the parallel processing capabilities of the thread pool. This applies whether you are using, as you put it, a "truly non-blocking" API or just any existing API that returns futures. Submitting your own operations via supplyAsync is pointless, as the supplier's code is still executed by a thread, as you correctly pointed out.

But when you have an existing future returned by an operation, you should chain the dependent processing steps onto it rather than wait for its result via join and friends. Note that calling join() on existing futures can make things even worse than just blocking threads:

When you call join() on a CompletableFuture, it tries to compensate for the problem. When the caller is a worker thread of a Fork/Join pool, one of two things can happen:

  • Instead of doing nothing, it may try to fetch pending jobs and execute them in place, similar to awaitQuiescence.
    • In the best case, it will directly pick up the job you just scheduled with supplyAsync (if using the same pool) and execute it, almost as if you had executed it without CompletableFuture (just consuming far more stack space).
    • In the worst case, the thread will be busy executing a long-running, entirely unrelated job while the job it's actually waiting for completed long ago. Imagine what happens if that unrelated job also calls join.
  • It may end up actually blocking the thread, but via ForkJoinPool.managedBlock(…), which may start a new worker thread to ensure that the pool's configured parallelism is maintained. Great for solving the problem of reduced parallelism, but it reintroduces the very resource-consumption problem you wanted to solve with thread pools in the first place.

Worst of all, you can't even predict which of the two will happen.
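To make the contrast concrete, here is a minimal sketch (the names and values are invented for illustration, not taken from the question's code base) of the two styles: blocking via join() versus registering the dependent step as a continuation, so that no thread has to wait for the intermediate result:

```java
import java.util.concurrent.CompletableFuture;

public class ChainVsJoin {

    // Blocking style: join() makes the calling thread wait (or "compensate"
    // by stealing unrelated work, as described above).
    static int blockingStyle() {
        CompletableFuture<Integer> rpc = CompletableFuture.supplyAsync(() -> 41);
        int partial = rpc.join(); // a thread is tied up here
        return partial + 1;
    }

    // Chained style: the dependent step is registered as a continuation,
    // so no thread has to wait for the intermediate result.
    static CompletableFuture<Integer> chainedStyle() {
        return CompletableFuture.supplyAsync(() -> 41)
                .thenApply(partial -> partial + 1);
    }

    public static void main(String[] args) {
        System.out.println(blockingStyle());
        // join() only once, at the outermost boundary of the chain
        System.out.println(chainedStyle().join());
    }
}
```

Both versions compute the same result; the difference is only in where, and whether, a pool thread sits idle in the middle of the chain.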


There are, however, cases where not blocking a request thread by utilizing other threads does have a point, most notably when the response time of the request itself matters and the results of the background computation are delivered independently of the initial response. The most prominent example of this pattern is the event dispatch thread of GUI frameworks, which must be kept free of long-running operations so it can process subsequent user input.


Note that there is a general solution on the way that will make 99% of all future chains obsolete. Virtual threads, in preview in JDK 19, are cheap to create and allow one thread per request, just like you envisioned in the quote above. When a virtual thread blocks, it releases the underlying platform thread to the next virtual thread, so there is no reason to hesitate to call join() on any future, even those belonging to "truly non-blocking" APIs.

The best way to interoperate with this concept and the status quo is to design methods that do not return futures but perform the operation in place. It's still possible to build a future chain when necessary, i.e. by using .thenApplyAsync(this::inPlaceEvalMethod) instead of .thenCompose(this::futureReturningMethod). But at the same time, you can write a plain sequential version that just calls these methods, which can be executed by a virtual thread. In fact, you could even add the plain sequential version today and benchmark both approaches. The results might convince your team members that "not blocking the request thread" is not necessarily an improvement.
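As a rough sketch of that direction (assuming JDK 21+, where virtual threads are final; on JDK 19 this requires --enable-preview, and the class and method names here are invented for illustration), the plain sequential logic can simply run on one cheap virtual thread per request:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadSketch {

    // Plain sequential logic: blocking calls are fine on a virtual thread,
    // because blocking only parks the virtual thread, not the carrier thread.
    static int handleRequest(int input) {
        try {
            Thread.sleep(20); // stand-in for the 5-30ms RPC in methodD
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return input + 1;
    }

    public static void main(String[] args) throws Exception {
        // One cheap virtual thread per request, as envisioned in the question.
        try (ExecutorService perRequest = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<Integer> response = perRequest.submit(() -> handleRequest(10));
            System.out.println(response.get());
        }
    }
}
```

Because Thread.sleep only parks the virtual thread, thousands of such requests can be in flight without thousands of platform threads, which is what the future chains were trying to achieve in the first place.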

On the first question: there is no need to wrap all the intermediate calls in CompletableFutures if they are all sequential. You can just as well wrap the whole chain of sequential calls in one single CompletableFuture:

public void myEndpoint(MyRequest request, StreamObserver<MyResponse> responseObserver) {
    CompletableFuture.supplyAsync(() -> methodA(10))
        .thenAccept(responseObserver::onNext);
}

public int methodA(Integer input) {
    var i = methodC(methodB(input));
    System.out.println("MethodA executed by ".concat(Thread.currentThread().getName() + ": " + i));
    return i;
}

public int methodB(Integer input) {
    var i = methodE(methodD(input));
    System.out.println("MethodB executed by ".concat(Thread.currentThread().getName() + ": " + i));
    return i;
}

public int methodC(Integer input) {
    var i = methodG(methodF(input));
    System.out.println("MethodC executed by ".concat(Thread.currentThread().getName() + ": " + i));
    return i;
}

public Integer methodD(Integer input) {
    try {
        // Assume it's a RPC call that takes 5-30ms
        Thread.sleep(20);
        System.out.println("MethodD executed by ".concat(Thread.currentThread().getName() + ": " + input));
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    return input + 1;
}

public int methodE(Integer input) {
    System.out.println("MethodE executed by ".concat(Thread.currentThread().getName() + ": " + input));
    return input + 1;
}

public int methodF(Integer input) {
    try {
        // Let's assume it's a CPU intensive work that takes 2-5ms
        Thread.sleep(5);
        System.out.println("MethodF executed by ".concat(Thread.currentThread().getName() + ": " + input));
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    return input + 1;
}

public int methodG(Integer input) {
    System.out.println("MethodG executed by ".concat(Thread.currentThread().getName() + ": " + input));
    return input + 1;
}

The result is the same, and the main thread does not get blocked. Since there are far fewer CompletableFuture instances, there is less overhead from handing calls over from one thread to another.

Thus, for question 2: no, this is not best practice the way your example code is structured. Use CompletableFuture when you must and avoid it otherwise. For example, you need to use CompletableFuture#thenCompose when you don't have control over the API you are calling (i.e. you can't change the return type from CompletableFuture<T> to plain T). Another case is when you want to take advantage of parallelism. But that is not applicable here.
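For completeness, here is a minimal sketch of the parallelism case, the one situation where the wrapping would pay off; the two RPC stubs are invented for illustration. Two independent calls run concurrently and are combined, so the total latency approaches the slower of the two calls rather than their sum:

```java
import java.util.concurrent.CompletableFuture;

public class ParallelCombine {

    // Two independent ~20ms calls; names are hypothetical stand-ins for RPCs.
    static int slowRpcA() { sleep(20); return 1; }
    static int slowRpcB() { sleep(20); return 2; }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    // Both calls are started at once and their results combined, so the
    // total latency approaches max(A, B) instead of A + B.
    static int combined() {
        return CompletableFuture.supplyAsync(ParallelCombine::slowRpcA)
                .thenCombine(CompletableFuture.supplyAsync(ParallelCombine::slowRpcB), Integer::sum)
                .join();
    }

    public static void main(String[] args) {
        System.out.println(combined());
    }
}
```

In the question's code, by contrast, every step depends on the previous one, so there is nothing to combine and this benefit never materializes.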

Your question is a little unclear, and I think your comment phrases the real question better. It is copied and pasted below.

In this example nothing is running in parallel; all the methods run sequentially, as described in the Java code. My understanding is that CompletableFuture is used in our code base mostly to ensure that we don't block the main thread on IO, but I'm trying to understand the advantage of not blocking the request thread. At the end of the day, some thread (methodD's CompletableFuture) is waiting for the IO; it just happens that it's not the initial request thread. Do you think CompletableFuture should only be used to achieve parallelism?

Great question.

When you write plain, sequential code with no CompletableFuture<T>, your code runs synchronously on a single thread; no new threads are made to run it. However, when you create a CompletableFuture<T> and put a task on it, a couple of things occur.

  • A worker thread is obtained (by default from the common Fork/Join pool, possibly newly created)
  • The task given to the CompletableFuture<T> is placed onto that thread
  • Java then uses a scheduler to jump back and forth between the main thread and the worker thread when doing work.
    • Now, if your computer has multiple cores and the number of cores is larger than the number of threads, the above may not happen. But typically, the number of threads an application uses is way more than 2/4/8, so the point I am making above is almost always true

As you can see, the third bullet is the most important, because that is where the biggest benefit of multithreading occurs. The Java scheduler allows threads to be paused and resumed on the fly, so that every thread can make some progress over time.

This is powerful because some threads may be waiting on IO work to complete. A thread that is waiting on IO is essentially doing nothing, wasting its turn on the CPU core.

By using the scheduler, you can minimize (if not eliminate) the time wasted on a core and quickly switch to a thread that is not waiting on IO.

And this is probably the big benefit your teammates are striving for: they want to ensure that all the work being done wastes as little time as possible on the core. That is the big point.

That said, whether or not they actually succeeded depends on a couple of questions that I need you to answer.

  1. You mentioned methodB and methodC. Can any of these methods run in parallel? Does methodB have to complete fully before methodC can execute, or can methodC run in parallel with methodB? The same question applies to methodD and methodE, and to methodF and methodG. I understand that currently they run sequentially and wait for each other to finish; that's not my question. I am asking whether it is possible for them to run in parallel.

  2. Are you using rate-limiting tools like Semaphore anywhere in your code? Ideally, I would limit the scope of your answer to explicit code that your team writes, but if you know for sure that one of your dependencies does it, feel free to mention that too.

  • If your answer to question 1 is no, then 99% of the time, doing what your team is doing is a terrible idea. The only method that should be on its own separate thread is methodA, but it sounds like you are already doing that.
  • If your answer to question 1 is at least partly yes but question 2 is no, then your teammates are pretty much correct. Over time, try to get an idea of where and when it makes the most sense. But as a first-pass solution? It isn't horrible.
    • If you said that B and C can be parallel but D and E cannot, then wrapping B and C in CompletableFuture<T> makes sense, but not D and E; those should just be basic sequential Java code. Unless, of course, this is a modular method/class that can be used in other code and might be parallel there. Nuance is required here, but starting with all of them as CompletableFuture<T> isn't a terrible first solution.
  • If your answer to question 1 is at least partly yes and your answer to question 2 is also yes, then you'll have to take a deep dive into your profiler to find the answer. Things like Semaphore are a different type of IO, since they are a "context-dependent" tax that you pay depending on the state of the program around you. But since they are a construct that exists inside your code, they become a dependable and measurable sort of IO that you can build deductions and assumptions from. To keep my answer short: rate-limiting tools let you make dependable assumptions about your code, so any results from your profiler will be far more useful than they would be otherwise. methodA should definitely still be on its own separate thread.

So, in short:

  • If 1 and 2 are yes, the answer is going to require nuance. Go into your profiler and find out.
  • If 1 is yes but 2 is no, your teammates are right. Change as needed, but go ahead with this solution.
  • If 1 is no, then your teammates are wrong. Make them change it.

And in all of these cases, methodA should be on its own separate thread no matter what.

EDIT: the original poster has confirmed that the answers to questions 1 and 2 are both no. Therefore, the team is wrong and should pull back this change. I will take this opportunity to explain in more detail why their decision is wrong.

As mentioned before, the big utility behind CompletableFuture<T> and other threading tools is that they allow you to do work on some threads while other threads are waiting on an IO operation to finish. This is accomplished by switching between threads.

However, if no IO operation is being performed, then you are not saving any time, because none of the threads were ever waiting. So you gain nothing by having CompletableFuture<T>. Worse yet, you actually lose performance by doing this.

See, when switching between threads like I just mentioned, the thread's state has to be swapped: the variables in scope for the incoming thread are loaded while the data of the previous thread is unloaded. Such a switch is fast, but it's not instantaneous; it costs you performance every time.

And to make matters worse, your teammates put this on every method. So not only are they slowing down their code by a nontrivial amount just to pointlessly spin their wheels, they are doing it extremely frequently, since this de-optimization occurs on every method.
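The cost can be illustrated with a small sketch (the step function is hypothetical, not one of the question's actual methods): both versions below compute the same thing, but the wrapped one submits a pool task per step, just as wrapping every method does:

```java
import java.util.concurrent.CompletableFuture;

public class HopOverhead {

    static int step(int i) { return i + 1; }

    // Direct calls: everything stays on one thread, no hand-off cost.
    static int direct(int input) {
        return step(step(step(input)));
    }

    // One supplyAsync per step, mirroring the per-method wrapping: each step
    // is submitted to the pool and may run on a different worker thread,
    // paying submission and wake-up costs for zero gain.
    static int wrapped(int input) {
        return CompletableFuture.completedFuture(input)
                .thenCompose(i -> CompletableFuture.supplyAsync(() -> step(i)))
                .thenCompose(i -> CompletableFuture.supplyAsync(() -> step(i)))
                .thenCompose(i -> CompletableFuture.supplyAsync(() -> step(i)))
                .join();
    }

    public static void main(String[] args) {
        // Same result either way; only the scheduling cost differs.
        System.out.println(direct(10) + " " + wrapped(10));
    }
}
```

Multiply that per-step cost by the 50+ wrapped methods per request and 1000 QPS from the question, and the overhead is paid continuously for no concurrency benefit.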

I would confront your team immediately and point out how damaging this is. They are explicitly in the wrong here, and even if they were preparing for some inevitable future, this is still a terrible time to implement it. Wait until that time comes and build it out as the need arises. As it stands, they are gutting their performance for no good reason.
