GlobalKTable - StreamsException: Encountered a topic-partition not associated with any global state store

I am trying to use Kafka Streams to create a GlobalKTable from a stream, and I get an exception when calling streams.start():

org.apache.kafka.streams.errors.StreamsException: Encountered a topic-partition not associated with any global state store

My code is:

private KafkaStreams streams; // not final - it is assigned later, in instantiateKafka()
private final StoreQueryParameters<ReadOnlyKeyValueStore<LocalDate, String>> bankHolidayTypesSqp = StoreQueryParameters.fromNameAndType("bank_holiday_type_store"
            ,QueryableStoreTypes.<LocalDate, String>keyValueStore());
private final ReadOnlyKeyValueStore<LocalDate, String> localBankHolidayTypeStore;

private void instantiateKafka()
{
    // configure Kafka

    StreamsBuilder builder = new StreamsBuilder();

    // CustomSerializableSerde is just a generic serializer that uses standard java Base64 encoding on any object
    // that implements Serializable - it works in a dummy application I've tested, so I don't think it's the problem
    addGlobalTableToStreamsBuilder(builder, bankHolidayTypeTopic, "bank_holiday_type_store", new CustomSerializableSerde<LocalDate>(), Serdes.String());

    streams = createStreams("localhost:9092", "C:\\Kafka\\tmp\\kafka-streams-global-tables", MyClass.class.getName(), builder);
    streams.start(); // hangs until the global table is built
}

public static <Tk extends Serializable, Tv extends Serializable> StreamsBuilder addGlobalTableToStreamsBuilder(StreamsBuilder builder, String topic
        , String store, Serde<Tk> keySerde, Serde<Tv> valueSerde)
{
    builder.globalTable(topic, Materialized.<Tk, Tv, KeyValueStore<Bytes, byte[]>>as(store)
            .withKeySerde(keySerde)
            .withValueSerde(valueSerde));
    return builder;
}

public static KafkaStreams createStreams(final String bootstrapServers, final String stateDir, String clientID, StreamsBuilder finishedBuilder)
{
    final Properties streamsConfiguration = new Properties();
    // Give the Streams application a unique name. The name must be unique in the Kafka cluster against which the application is run.
    streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "applicationName");
    streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, clientID);
    // Where to find Kafka broker(s).
    streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    streamsConfiguration.put(StreamsConfig.STATE_DIR_CONFIG, stateDir);
    // Set to earliest so we don't miss any data that arrived in the topics before the process started
    streamsConfiguration.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    return new KafkaStreams(finishedBuilder.build(), streamsConfiguration);
}
    

The producer:

Producer<LocalDate,String> bankHolidayTypeProducer = MyClass.<LocalDate,String>createProducer("localhost:9092", BankHolidayData.class.getName()
                    , CustomSerializer.class.getName(), StringSerializer.class.getName());

//...

HashMap<LocalDate, String> bankHolidaysData = populateBankHolidayMap();

for (LocalDate bhDay : bankHolidaysData.keySet())
{
    bankHolidayTypeProducer.send(new ProducerRecord<>(bankHolidayTypeTopic, bhDay, bankHolidaysData.get(bhDay)));
}

public static <Tk extends Serializable, Tv extends Serializable> Producer<Tk,Tv> createProducer(String bootstrapServers
        , String clientID, String keySerializerClassName, String valueSerializerClassName)
{
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.CLIENT_ID_CONFIG, clientID);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, keySerializerClassName);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, valueSerializerClassName);
    return new KafkaProducer<>(props);
}

My topics are auto-created by the producer the first time it produces to them, and they will always exist by the time the GlobalKTable tries to read from them. Is that the problem? Is there something I need to do when setting up a topic to tell Kafka that it will be used by a Streams GlobalKTable?
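
One way to rule out auto-creation is to create the topic explicitly before the Streams application ever starts. Below is a minimal sketch using Kafka's AdminClient; the helper name, the single partition, and replication factor 1 are placeholder assumptions, not values from the post:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Hypothetical helper: creates the topic up front so the producer never auto-creates it.
// Fails with a TopicExistsException (wrapped in an ExecutionException) if the topic already exists.
public static void createTopic(String bootstrapServers, String topic) throws Exception
{
    Properties adminProps = new Properties();
    adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    try (AdminClient admin = AdminClient.create(adminProps))
    {
        // 1 partition, replication factor 1 - adjust for your cluster
        admin.createTopics(Collections.singleton(new NewTopic(topic, 1, (short) 1))).all().get();
    }
}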

It turned out that the structure of the topic had (apparently) changed at some point, which meant the Streams application needed to be reset. To do that you can use the application Conduktor, or the reset tool described at http://docs.confluent.io/current/streams/developer-guide.html#application-reset-tool.
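
For reference, the reset tool ships with the Kafka distribution as kafka-streams-application-reset. A typical invocation, assuming the broker address and application id from the code above and a placeholder topic name, looks something like:

bin/kafka-streams-application-reset.sh --application-id applicationName \
    --bootstrap-servers localhost:9092 \
    --input-topics <your-global-table-topic>

The application must be stopped while the tool runs, and each instance should also clear its local state (via KafkaStreams#cleanUp() or by deleting the state directory) before restarting.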

In case it helps someone: this can also happen if you update a schema on the consumer side of the GlobalKTable while the schema has changed on the Kafka side. For me the fix was simply to delete my local state store folder, which usually sits in the project root directory.
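
Deleting the state directory by hand works; the programmatic equivalent is KafkaStreams#cleanUp(), which deletes the application's local state directory and may only be called while the instance is not running. A minimal sketch against the code above:

streams = createStreams("localhost:9092", "C:\\Kafka\\tmp\\kafka-streams-global-tables", MyClass.class.getName(), builder);
streams.cleanUp(); // wipes local state for this application.id; only valid before start() or after close()
streams.start();   // the global state store is then rebuilt from the topic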

If you are running a local instance, or you can afford to do this, then:

  • Stop the broker
  • Stop ZooKeeper
  • Delete the kafka-streams folder under the parent folder referenced by the java.io.tmpdir system property.

You can find the actual folder name in the broker logs by searching for "INFO Client environment:java.io.tmpdir".
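
If you would rather not search the broker logs, the same value can be printed from any JVM on the broker's machine, since java.io.tmpdir is a standard Java system property:

// Prints the temp directory whose kafka-streams subfolder holds the local state
System.out.println(System.getProperty("java.io.tmpdir"));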
