
GraphQL resolvers are running in the OkHttp thread pool, which is different from the root resolvers' thread pool

GraphqlRootResolver runs in a user-defined thread pool, but the GraphqlResolvers run in the OkHttp thread pool, and the MDC context is not propagated to the OkHttp thread pool. So my questions are:

  1. When the GraphQL query resolver runs in the user-defined thread pool, why don't the field resolvers run in it as well?
  2. If this is the expected behaviour, how can we copy the MDC context to those thread pools?
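For background on why this can happen (an assumption about the mechanics, since the code that chains the futures isn't shown in the question): a non-async `CompletableFuture` dependent stage runs on whichever thread *completes* the future, not on the thread that attached the stage. A minimal, dependency-free sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionThreadDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for an OkHttp dispatcher thread (name is illustrative).
        ExecutorService ioPool =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "fake-okhttp"));
        CountDownLatch gate = new CountDownLatch(1);

        CompletableFuture<String> author = CompletableFuture.supplyAsync(() -> {
            try {
                gate.await(); // hold the result until the dependent stage is attached
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            return "author";
        }, ioPool);

        // Non-async thenApply: the future is still pending here, so this stage
        // will later run on the completing thread, i.e. "fake-okhttp".
        CompletableFuture<String> stageThread =
                author.thenApply(a -> Thread.currentThread().getName());

        gate.countDown();
        System.out.println(stageThread.get());
        ioPool.shutdown();
    }
}
```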

My user-defined thread pool looks like this:

@Bean(name = "graphqlAsyncTaskExecutor")
public Executor newExecutor() {
    var executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(Runtime.getRuntime().availableProcessors());
    executor.setMaxPoolSize(Runtime.getRuntime().availableProcessors());
    executor.setKeepAliveSeconds(0);
    // Configure before initialize(); setters applied afterwards may not take effect.
    executor.setThreadNamePrefix("hub-orch");
    // Register the MDC decorator; without this, decorate() is never invoked.
    executor.setTaskDecorator(new MdcContextTaskDecorator());
    executor.initialize();
    return new DelegatingSecurityContextAsyncTaskExecutor(executor);
}

The MdcTaskDecorator code:

public class MdcContextTaskDecorator implements AsyncTaskDecorator, TaskDecorator {

    /**
     * Propagate the current thread's MDC context to the target thread.
     */
    @Override
    public Runnable decorate(Runnable runnable) {
        // Capture the submitting thread's context at decoration time.
        var mdcContext = MDC.getCopyOfContextMap();
        return () -> {
            try {
                // getCopyOfContextMap() returns null when the MDC is empty;
                // guard against it, since setContextMap(null) can throw.
                if (mdcContext != null) {
                    MDC.setContextMap(mdcContext);
                }
                runnable.run();
            } finally {
                MDC.clear();
            }
        };
    }

}

After some debugging I found the reason why this happens. When one data object is nested inside another data object, the nested object is completed on the same thread on which the outer object was completed.

For example: Author is a data object, and inside it we have another data object, books. Now let's say both live in different services, both require an HTTP call to fetch their data, and books needs an author id before it can be resolved. We call the author API to get the data, and it runs in the OkHttp thread pool, so the next books API call also happens in the OkHttp thread pool. We solved the problem by resolving author on a separate thread and setting the MDC context for that thread.
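The "separate thread plus MDC context" workaround can be illustrated with plain JDK types. A `ThreadLocal` stands in for slf4j's MDC so the sketch stays dependency-free; the names and the pattern are illustrative, not the actual service code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ContextCopyDemo {
    // Stand-in for the MDC: one context value per thread.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) throws Exception {
        ExecutorService resolverPool = Executors.newSingleThreadExecutor();
        CONTEXT.set("request-123"); // set on the caller (e.g. root resolver) thread

        // Capture the caller's context, re-install it on the worker, and
        // clean up afterwards -- the same shape as MdcContextTaskDecorator.
        String captured = CONTEXT.get();
        Future<String> seenOnWorker = resolverPool.submit(() -> {
            try {
                CONTEXT.set(captured);
                return CONTEXT.get(); // the worker now sees the caller's context
            } finally {
                CONTEXT.remove();
            }
        });

        System.out.println(seenOnWorker.get());
        resolverPool.shutdown();
    }
}
```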

But one issue we faced is that we couldn't do this on the main query resolver thread. In the example above, let's say another User object exists which wraps authors, and it is the root query resolver. What happens then is that UserQueryResolver completes on graphql-exec-1, but the field resolver completes on graphql-exec-2 or graphql-exec-1 at random. This is fine, but if there is a better approach in which the field resolvers always complete on the root query resolver thread, that would be great.
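One way to at least keep the field resolvers off the OkHttp pool (a sketch under the assumption that the resolvers return `CompletableFuture`s you control) is to use the `*Async` variants of `CompletionStage` with an explicit executor: the dependent stage then always runs on that executor, regardless of which thread completed the future:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PinStageDemo {
    public static void main(String[] args) throws Exception {
        // Stand-ins for the OkHttp pool and the user-defined resolver pool.
        ExecutorService ioPool =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "fake-okhttp"));
        ExecutorService resolverPool =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "hub-orch-1"));

        // thenApplyAsync with an explicit executor always runs the dependent
        // stage on that executor, no matter which thread completes the future.
        String stageThread = CompletableFuture
                .supplyAsync(() -> "author", ioPool)   // completes on fake-okhttp
                .thenApplyAsync(a -> Thread.currentThread().getName(), resolverPool)
                .get();

        System.out.println(stageThread);
        ioPool.shutdown();
        resolverPool.shutdown();
    }
}
```

Note that this pins the stage to a *pool*, not to one specific thread; guaranteeing completion on the exact root-resolver thread would need something like a dedicated single-thread executor per request, which is rarely worth the overhead.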

Thanks.
