
Spring Reactor: How to wait for multiple Fluxes by key?

Conceptually, I have a source that emits IP addresses (indefinitely) and two processors.

Those processors essentially make IO requests. What I'd like to do is merge the results of these processors once both are done and pass them to a sink that can handle both results together.

I tried to write some toy example, but it doesn't work as the source Flux never ends.

What's the right way to do that?

import java.time.Duration;
import java.util.function.Function;

import com.google.common.collect.Lists; // assuming Guava's Lists

import reactor.core.publisher.ConnectableFlux;
import reactor.core.publisher.Flux;

public class Demo {

    public static void main(String[] args) throws Exception {

        // Infinite source: cycles through the three IPs forever, emitting one every 500 ms
        Flux<String> source = Flux.fromIterable(Lists.newArrayList("1.1.1.1", "2.2.2.2", "3.3.3.3"))
                .delayElements(Duration.ofMillis(500))
                .repeat();
        ConnectableFlux<String> ipsFlux = source.publish();

        Flux<Foo> fooFlux1 = Flux.from(ipsFlux)
                .map(ip -> new Foo(ip, "1"));

        Flux<Foo> fooFlux2 = Flux.from(ipsFlux)
                .map(ip -> new Foo(ip, "2"));

        // Group merged Foos by id, then for each id collect the group into a Map keyed by type
        Flux.merge(fooFlux1, fooFlux2)
                .groupBy(Foo::getId, Function.identity())
                .subscribe(flux -> flux.collectMap(foo -> foo.type).subscribe(System.out::println));

        ipsFlux.connect();

        Thread.currentThread().join();

    }

    static class Foo {
        String id;
        String type;

        public Foo(String id, String type) {
            this.id = id;
            this.type = type;
        }

        public String getId() {
            return id;
        }

        @Override
        public String toString() {
            return "Foo{" +
                    "id='" + id + '\'' +
                    ", value='" + type + '\'' +
                    '}';
        }
    }
}

Looking at the documentation of the merge operator (https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#merge-org.reactivestreams.Publisher...-), it seems that merge is not suited to handling infinite streams:

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
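
The isolation the docs describe would look roughly like this (a sketch, not a recommendation; it assumes Schedulers.boundedElastic() from reactor.core.scheduler is acceptable for your IO-bound processors, and isolated1/isolated2 are just illustrative names):

// Give each never-ending branch its own Scheduler so merge does not
// try to drain one source before subscribing to the other.
Flux<Foo> isolated1 = fooFlux1.subscribeOn(Schedulers.boundedElastic());
Flux<Foo> isolated2 = fooFlux2.subscribeOn(Schedulers.boundedElastic());

Flux<Foo> merged = Flux.merge(isolated1, isolated2);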

Instead, I would try the zip operator (https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#zip-org.reactivestreams.Publisher-org.reactivestreams.Publisher-):

Flux<Tuple2<Foo, Foo>> zipped = Flux.zip(fooFlux1, fooFlux2);

Then your sink can consume a pair of Foo values as soon as both are available.
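
For example, a minimal sketch of such a sink, reusing the zipped flux above (zip pairs elements by index, and since both fooFlux1 and fooFlux2 are fed from the same ipsFlux, each pair should correspond to the same IP; the local variable names are just illustrative):

zipped.subscribe(tuple -> {
    Foo fromProcessor1 = tuple.getT1(); // result of processor "1" for this IP
    Foo fromProcessor2 = tuple.getT2(); // result of processor "2" for the same IP
    System.out.println(fromProcessor1 + " / " + fromProcessor2);
});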
