Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance

I created a topic and ran a simple producer to publish some messages to that topic:

 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-file-input

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input

I am running the simple example below on Kafka Streams, and I get a weird exception that I cannot handle:

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.3:9092");
props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

// setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

// pipe every record from the input topic straight to the output topic
KStreamBuilder builder = new KStreamBuilder();
builder.stream("streams-file-input").to("streams-pipe-output");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();

// usually the stream application would be running forever;
// in this example we just let it run for some time and stop since the input data is finite.
Thread.sleep(5000L);

streams.close();

 Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
            at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
            at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
        Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
            at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
            at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
            at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
            at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
            at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
            at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
            at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
            at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
            at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
            at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
            at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
            at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
            at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
            at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
            at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
            at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
            at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
            at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
            at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
            at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
            at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
            at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
            ... 1 more
        Caused by: java.io.FileNotFoundException: C:\tmp\kafka-streams\my-streapplication\0_0\.lock (The system cannot find the path specified)
            at java.io.RandomAccessFile.open0(Native Method)
            at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
            at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
            at org.apache.kafka.streams.processor.internals.ProcessorStateManager.lockStateDirectory(ProcessorStateManager.java:125)
            at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:93)
            at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version>0.10.0.0</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.10.0.0</version>
</dependency>

Whatever I do, I get this exception. I am running the Kafka cluster in VMware with Ubuntu (the version I use is kafka_2.11-0.10.0.0). Maybe the problem is RAM/CPU?

Caused by: java.io.FileNotFoundException: C:\tmp\kafka-streams\my-streapplication\0_0\.lock (The system cannot find the path specified)

This means that the parent directory C:\tmp\kafka-streams for your application state does not exist. It is the default state directory in StreamsConfig. I don't know why its creation fails on Windows.
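One quick workaround (a minimal sketch, not part of the original answer) is to create the missing parent directory yourself before starting the Streams application. The path below is taken from the stack trace and may differ on your machine; the class name EnsureStateDir is just illustrative:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EnsureStateDir {
    public static void main(String[] args) throws IOException {
        // Default Kafka Streams state directory as reported in the FileNotFoundException;
        // adjust the path if your setup is different.
        Path stateDir = Paths.get("C:\\tmp\\kafka-streams");

        // createDirectories is a no-op if the directory already exists.
        Files.createDirectories(stateDir);
        System.out.println("State directory ready: " + stateDir.toAbsolutePath());
    }
}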

You can also set StreamsConfig.STATE_DIR_CONFIG to a directory of your choice.

Thanks to @Muyoo, this is the correct fix:

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG,"my-stremapplication");
        props.put(StreamsConfig.STATE_DIR_CONFIG, "streams-pipe");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.210:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
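Note that "streams-pipe" here is a relative path, so the state files will typically end up under the JVM's working directory rather than under C:\tmp; any writable location should work, as long as the process can create and lock files there.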
