
Supplier binding is not working with spring cloud stream rabbit

We have a source like the following, and we are using the Spring Cloud Stream Rabbit binder 3.0.1.RELEASE.

import java.util.function.Function;
import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

@Component
public class Handlers {

  private final EmitterProcessor<String> sourceGenerator = EmitterProcessor.create();

  public void emitData(String str) {
    sourceGenerator.onNext(str);
  }

  @Bean
  public Supplier<Flux<String>> generate() {
    return () -> sourceGenerator;
  }

  @Bean
  public Function<String, String> process() {
    return str -> str.toUpperCase();
  }
}

application.yml

spring:
  profiles: dev
  cloud:
    stream:
      function:
        definition: generate;process
        bindings:
          generate-out-0: source1
          process-in-0: source1
          process-out-0: processed

        bindingServiceProperties:
          defaultBinder: local_rabbit

      binders:
        local_rabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: / 

While calling the emitData method, we do not see data in the RabbitMQ queue. We also observed that the consumer binding is working; we verified that by sending messages directly into a consumer-linked queue through the RabbitMQ admin console. But the supplier binding is not working.

We also observed that a Supplier without Flux works fine with the same application.yml configuration. Are we missing any configuration here?

Even a test case with TestChannelBinderConfiguration works fine, as follows.

@Slf4j
@TestPropertySource(
        properties = {"spring.cloud.function.definition = generate|process"}
)
public class HandlersTest extends AbstractTest {
  @Autowired
  private Handlers handlers;

  @Autowired
  private OutputDestination outputDestination;

  @Test
  public void testGeneratorAndProcessor() {
      final String testStr = "test";
      handlers.emitData(testStr);

      final Message<byte[]> message = outputDestination.receive(1000);

      assertNotNull(message, "processing timeout");
      assertEquals(testStr.toUpperCase(), new String(message.getPayload()));
  }
}

When you say "we are not seeing data in RabbitMQ queue", which queue are you talking about? With AMQP, messages are sent to exchanges, and if an exchange is not bound to any queue the message is dropped, hence my question: did you actually bind the generate-out-0 exchange to a queue?

In any event, I just tested it and everything works as expected. Here is the complete code.

import java.util.function.Supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;

import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

@SpringBootApplication
public class SimpleStreamApplication {

    public static void main(String[] args) throws Exception {
        ApplicationContext context = SpringApplication.run(SimpleStreamApplication.class);
        SimpleStreamApplication app = context.getBean(SimpleStreamApplication.class);
        app.emitData("Hello");
    }

    private EmitterProcessor<String> sourceGenerator = EmitterProcessor.create();

    public void emitData(String str) {
        sourceGenerator.onNext(str);
    }

    @Bean
    public Supplier<Flux<String>> generate() {
        return () -> sourceGenerator;
    }
}

While I appreciate you posting a project, unfortunately your story keeps changing and I am still not sure what it is you want to accomplish. So this is my last response, but I'll try to be as detailed and informative as I can. Here is what I see in your project.

  1. Your configuration is faulty. The definition property for functions should be spring.cloud.function.definition:

. . . . . .

spring:
  cloud:
    function:
       definition: generate;process;sink

. . . . . .

  2. Since you are using ; I am assuming you want all 3 functions to be bound independently (no function composition), as described in the multiple-bindings section of the documentation.

  3. The spring.cloud.stream.function.bindings property allows you to map a generated binding name to a custom binding name, as described in Function Binding Names. It has nothing to do with the names of the actual destinations. For that we have the destination property, which is also covered in the referenced section (e.g., --spring.cloud.stream.bindings.generate-out-0.destination=source1). If the destination property is not used, the binding name and the destination name are assumed to be the same. A consumer destination, however, also requires a group name, and if one is not provided it is generated. So, based on your configuration, your generate-out-0 supplier is bound to the source1 exchange:

[screenshot: the source1 exchange in the RabbitMQ admin console]

The input of the process-in-0 function, on the other hand, is bound to a source1.anonymous... queue:

[screenshot: the source1.anonymous... queue in the RabbitMQ admin console]

And, as I stated earlier, there is no RabbitMQ binding between the source1 exchange and the source1.anonymous... queue, therefore messages sent to the source1 exchange are simply dropped. By creating such a binding (e.g., via the RabbitMQ console) the messages would reach the consumer.
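For reference, a declarative way to get such a binding without touching the RabbitMQ console is to give the consumer an explicit group, so the Rabbit binder creates a durable queue already bound to the exchange. This is a sketch; the destination and group names here are illustrative:

```yaml
spring:
  cloud:
    stream:
      bindings:
        generate-out-0:
          destination: source1      # name of the exchange
        process-in-0:
          destination: source1
          group: workers            # binder creates queue "source1.workers",
                                    # bound to the "source1" exchange
```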

That said, such a design is very inefficient. Why do you want to send to and receive from the same destination while in the same process space (JVM)? Why use the network when you can simply pass by reference? So, at the very least, change the definition to spring.cloud.function.definition=generate|process|sink (function composition). A better solution would be to simply write your code in the supplier itself:

public void emitData(String str) {
    String uppercased = str.toUpperCase();
    sourceGenerator.onNext(uppercased);
    System.out.println("Emitted: " + uppercased);
}

and be done with it. Anyway, I would strongly suggest that you go over our user guide, specifically the Main Concepts and Programming Model sections, as I believe you have misunderstood certain core concepts, which contributes to the inconsistencies in both your post and your questions.

We made some changes in the code, but the issue is still here: the Flux implementation of the supplier is not working, while the non-Flux supplier works fine:


    @Bean
    public Supplier<Flux<String>> generate_flux() {
        return () -> sourceGenerator;
    }

    @Bean
    public Supplier<Message<?>> generate_non_flux() {
        return MessageBuilder
           .withPayload("Non flux emitter: " + LocalDateTime.now().toString())::build;
    }

The full source is in the same place.

We also changed application.yml as you suggested, and we did some experiments. Thank you for the explanation of the meaning of topics. But we also checked, and we can say that RabbitMQ automatically links outputs and consumers that share the same destination, with any specified group names. It works both for explicitly specified groups and for randomly generated ones. This is not about parallel processing; it is about RabbitMQ's ability to link them.

Both generate_flux and generate_non_flux are connected to the same output destination:

      bindings:
        generate_flux-out-0:
          destination: source
        generate_non_flux-out-0:
          destination: source

Now the output of the application is:

Consumed: NON FLUX EMITTER: 2020-01-09T13:38:49.761801
Flux emitted: 2020-01-09T13:38:51.721094
Consumed: NON FLUX EMITTER: 2020-01-09T13:38:49.761801
Flux emitted: 2020-01-09T13:38:52.725961
Consumed: NON FLUX EMITTER: 2020-01-09T13:38:49.761801
Flux emitted: 2020-01-09T13:38:53.727054
Consumed: NON FLUX EMITTER: 2020-01-09T13:38:49.761801
Flux emitted: 2020-01-09T13:38:54.727898
Consumed: NON FLUX EMITTER: 2020-01-09T13:38:49.761801
Consumed: NON FLUX EMITTER: 2020-01-09T13:38:49.761801

There are processed NON FLUX messages, but there are no Flux ones.
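A side note on why the NON FLUX payload above carries the same timestamp on every poll: in `generate_non_flux`, the payload expression is evaluated once, when the method reference is created, and every subsequent poll merely rebuilds a message around that same string. A plain-JDK sketch of the difference (class and method names here are illustrative, not from the project):

```java
import java.time.LocalDateTime;
import java.util.function.Supplier;

public class SupplierCaptureDemo {

    // Mirrors generate_non_flux: the payload string is computed once,
    // when the supplier is created; every get() returns the same value.
    static Supplier<String> capturedOnce() {
        String payload = "Non flux emitter: " + LocalDateTime.now();
        return () -> payload;
    }

    // Computes a fresh payload on every get(), i.e. on every poll.
    static Supplier<String> perPoll() {
        return () -> "Non flux emitter: " + LocalDateTime.now();
    }

    public static void main(String[] args) throws Exception {
        Supplier<String> once = capturedOnce();
        String a = once.get();
        Thread.sleep(5);
        String b = once.get();
        System.out.println(a.equals(b)); // true: same captured payload

        Supplier<String> fresh = perPoll();
        String c = fresh.get();
        Thread.sleep(5);
        String d = fresh.get();
        System.out.println(c.equals(d)); // false: new timestamp each poll
    }
}
```

To get a fresh timestamp per poll with the message-based supplier, the `MessageBuilder.withPayload(...)` call would need to happen inside the lambda body rather than in the method-reference receiver.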

So, the non-Flux emitter works fine, but we cannot use it to emit on request. The Flux implementation of the supplier doesn't work. That is where we started, and we have not made any changes to the description of the task.
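One generic hazard worth ruling out with emit-on-demand sources: anything pushed before the framework has actually subscribed to the Flux can be lost. EmitterProcessor has its own buffering rules, so this is only an analogy, sketched here with the JDK's built-in Flow API (class and method names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class EarlyEmissionDemo {

    // Emits "early" before any subscriber exists (dropped by
    // SubmissionPublisher), then "late" after subscription (delivered).
    // Returns the items the subscriber actually saw.
    static List<String> run() throws InterruptedException {
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        publisher.submit("early"); // no subscribers yet: silently dropped

        publisher.subscribe(new Flow.Subscriber<String>() {
            public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
            public void onNext(String item) { received.add(item); }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        });

        publisher.submit("late"); // a subscriber is registered: delivered
        publisher.close();
        done.await(5, TimeUnit.SECONDS);
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [late]
    }
}
```

If something similar turned out to be happening in the real application, deferring the first emitData call until after the context and bindings are fully started would be the corresponding fix.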

Speaking about our splitting of the code into supplier, processor and sink: we are talking about different types of machines. The supplier is legacy code that generates data. The processor is the memory-consuming part of the workflow, and we want to keep it on a separate set of VMs with the ability to scale it in Kubernetes. The sink, in our case, is a specific machine that stores data into a DB. At the same time, due to the legacy code, we want to keep the application's common code together and not split it into separate applications like Apache Beam-based ones.
