
Kafka-Streams: Aggregate over the result of a (KTable, KTable) join

I've got three topics, each with a key and a payload. I try to join the first two topics, aggregate the result, and finally join this result with the third topic. But it does not work as expected.

Let me illustrate the situation with a simple example:

Topic 1 "Company": 
- Key:1234, {"id":"1234", ...}
...

Topic 2 "Mapping":
- Key:5678, {"id":"5678", "company_id":"1234", "category_id":"9876"}
- Key:5679, {"id":"5679", "company_id":"1234", "category_id":"9877"}
...

Topic 3 "Categories":
- Key:9876, {"id":"9876", "name":"foo"}
- Key:9877, {"id":"9877", "name":"bar"}
...

I want every company to end up with a list of all related categories. I tried to join "Mapping" with "Categories" and aggregate the "name" over the result. This fails with the following error:

org.apache.kafka.streams.errors.StreamsException: failed to initialize processor KTABLE-FK-JOIN-OUTPUT-0000000018

Processor KTABLE-FK-JOIN-OUTPUT-0000000018 has no access to StateStore KTABLE-FK-JOIN-OUTPUT-STATE-STORE-0000000019 as the store is not connected to the processor.

This is what I tried:

    var joined = mappedTable
                    .leftJoin(
                            categoriesTable,
                            mappedForeignKey -> String.valueOf(mappedForeignKey.getCategoryId()),
                            (mapping, categories) -> new CategoriesMapping(mapping.getCompanyId(), categories.getName()),
                            Materialized.with(Serdes.String(), mappedSerde)
                    )
                    .groupBy((key, mapping) -> new KeyValue<>(String.valueOf(mapping.getCompanyId()), mapping), Grouped.with(Serdes.String(), mappedSerde))
                    .aggregate(
                            // ...
                    );

(I skipped the part where the joined table is finally joined with the "Company" table.)

The aggregate function produces something like [{mappedValue1},{mappedValue2}], and it works fine when there is no preceding table join.
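The aggregation logic itself can be sketched as plain Java, independent of the DSL wiring. This is an illustrative sketch (class and method names are not from the question); note that KTable#groupBy().aggregate() needs both an adder and a subtractor, because upstream table updates are modeled as a retraction of the old value followed by the new value:

```java
import java.util.ArrayList;
import java.util.List;

public class CategoryListAggregation {

    // Adder: append the new category name to the running list for a company.
    static List<String> add(List<String> agg, String categoryName) {
        List<String> next = new ArrayList<>(agg);
        next.add(categoryName);
        return next;
    }

    // Subtractor: remove one occurrence of the old category name when a
    // mapping record is updated or deleted upstream.
    static List<String> subtract(List<String> agg, String categoryName) {
        List<String> next = new ArrayList<>(agg);
        next.remove(categoryName);
        return next;
    }

    public static void main(String[] args) {
        List<String> agg = new ArrayList<>();
        agg = add(agg, "foo");
        agg = add(agg, "bar");
        System.out.println(agg); // prints [foo, bar]
    }
}
```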

Is there a way to make this join-then-aggregate work? Is it possible to get output like this:

key, value:{"id":..., ..., "name":[{foo},{bar}, ...]}

Full stack trace:

Exception in thread "company_details-16eef466-408a-4271-94ec-adad071b4d24-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: failed to initialize processor KTABLE-FK-JOIN-OUTPUT-0000000018
    at org.apache.kafka.streams.processor.internals.ProcessorNode.init(ProcessorNode.java:97)
    at org.apache.kafka.streams.processor.internals.StreamTask.initTopology(StreamTask.java:608)
    at org.apache.kafka.streams.processor.internals.StreamTask.initializeTopology(StreamTask.java:336)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.transitionToRunning(AssignedTasks.java:118)
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.updateRestored(AssignedStreamsTasks.java:349)
    at org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:390)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:769)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:698)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:671)
Caused by: org.apache.kafka.streams.errors.StreamsException: Processor KTABLE-FK-JOIN-OUTPUT-0000000018 has no access to StateStore KTABLE-FK-JOIN-OUTPUT-STATE-STORE-0000000019 as the store is not connected to the processor. If you add stores manually via '.addStateStore()' make sure to connect the added store to the processor by providing the processor name to '.addStateStore()' or connect them via '.connectProcessorAndStateStores()'. DSL users need to provide the store name to '.process()', '.transform()', or '.transformValues()' to connect the store to the corresponding operator. If you do not add stores manually, please file a bug report at https://issues.apache.org/jira/projects/KAFKA.
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.getStateStore(ProcessorContextImpl.java:104)
    at org.apache.kafka.streams.kstream.internals.KTableSource$KTableSourceProcessor.init(KTableSource.java:84)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.init(ProcessorNode.java:93)

and

java.lang.IllegalStateException: Expected postgres_company_categories-STATE-STORE-0000000000 to have been initialized
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:284) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:177) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:680) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:788) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.closeTask(AssignedStreamsTasks.java:80) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.closeTask(AssignedStreamsTasks.java:36) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.AssignedTasks.shutdown(AssignedTasks.java:256) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.shutdown(AssignedStreamsTasks.java:534) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:292) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1115) ~[kafka-streams-2.4.0.jar:na]
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:683) ~[kafka-streams-2.4.0.jar:na]

You are hitting a bug: https://issues.apache.org/jira/browse/KAFKA-9517

The bug is fixed for the upcoming 2.4.1 and 2.5.0 releases.

As a workaround, you can explicitly materialize the join result by passing Materialized.as("some-name") into leftJoin().
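Applied to the snippet from the question, the workaround would look roughly like this (a sketch, not a drop-in fix: the store name "fk-join-result" is illustrative, and the aggregate body is elided as in the original):

```java
var joined = mappedTable
        .leftJoin(
                categoriesTable,
                mappedForeignKey -> String.valueOf(mappedForeignKey.getCategoryId()),
                (mapping, categories) -> new CategoriesMapping(mapping.getCompanyId(), categories.getName()),
                // Workaround for KAFKA-9517: give the store an explicit name so the
                // FK-join result is materialized and connected to the processor.
                Materialized.<String, CategoriesMapping, KeyValueStore<Bytes, byte[]>>as("fk-join-result")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(mappedSerde)
        )
        .groupBy(
                (key, mapping) -> new KeyValue<>(String.valueOf(mapping.getCompanyId()), mapping),
                Grouped.with(Serdes.String(), mappedSerde)
        )
        .aggregate(
                // ...
        );
```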
