
Kafka configuration on existing CDH 5.5.2 cluster

I am installing Kafka 2.0 on my existing CDH 5.5.2 cluster. Here is the procedure that I followed:

  1. Added the service from Cloudera Manager (CM).
  2. Selected Kafka (before that, I downloaded, distributed, and activated the Kafka parcel on all the nodes).
  3. Selected 1 node for the Kafka Broker and 4 nodes for Kafka MirrorMaker.
  4. Updated the Destination Broker List (bootstrap.servers) property with one of the MirrorMaker nodes, and the Source Broker List (source.bootstrap.servers) with the same node (a rough sketch of these properties appears after the stack trace below).
  5. Below is the error I am getting (from the log file):

     Fatal error during KafkaServerStartable startup. Prepare to shutdown
     java.lang.OutOfMemoryError: Java heap space
         at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
         at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
         at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:43)
         at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:186)
         at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:83)
         at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:83)
         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
         at scala.collection.immutable.Range.foreach(Range.scala:166)
         at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
         at scala.collection.AbstractTraversable.map(Traversable.scala:104)
         at kafka.log.LogCleaner.<init>(LogCleaner.scala:83)
         at kafka.log.LogManager.<init>(LogManager.scala:64)
         at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:601)
         at kafka.server.KafkaServer.startup(KafkaServer.scala:180)
         at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
         at kafka.Kafka$.main(Kafka.scala:67)
         at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76)
         at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
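For reference, a rough sketch of the two broker-list properties from step 4 as they would appear in the MirrorMaker configuration; the hostname and port below are placeholders, not values from the actual cluster:

     source.bootstrap.servers=mirror-node-1.example.com:9092
     bootstrap.servers=mirror-node-1.example.com:9092

Both properties point at the same MirrorMaker node, as described in step 4.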

You need to increase the broker_max_heap_size value to at least 1 GB and restart the Kafka service from Cloudera Manager. If you still face the same issue, increase it further according to your cluster configuration.
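If you prefer to script the change instead of clicking through the CM UI, a minimal sketch using the Cloudera Manager REST API is shown below; the API version (v13), cluster name, service name, admin credentials, and role config group name are all assumptions and will likely differ in your deployment. The broker_max_heap_size value is in MiB, so 1024 corresponds to 1 GB:

     # Hypothetical example: set the Kafka broker heap to 1 GiB via the CM REST API,
     # then restart the Kafka service from Cloudera Manager for it to take effect.
     curl -u admin:admin -X PUT -H "Content-Type: application/json" \
          -d '{"items":[{"name":"broker_max_heap_size","value":"1024"}]}' \
          'http://cm-host:7180/api/v13/clusters/Cluster1/services/kafka/roleConfigGroups/kafka-KAFKA_BROKER-BASE/config'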


The stack trace shows "java.lang.OutOfMemoryError: Java heap space" - the JVM heap is running out of space. Increase it by setting

export KAFKA_HEAP_OPTS="-Xmx1G -Xms512M" 

in /bin/kafka-server-start.sh.
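In a stock Apache Kafka distribution, kafka-server-start.sh only applies its built-in heap default when KAFKA_HEAP_OPTS is unset, so instead of editing the script you can also export the variable in the shell that launches the broker. A minimal sketch (the config path is the usual default; adjust as needed):

     # kafka-server-start.sh keeps a pre-set KAFKA_HEAP_OPTS, so this export wins
     export KAFKA_HEAP_OPTS="-Xmx1G -Xms512M"
     bin/kafka-server-start.sh config/server.properties

Note that on a CM-managed cluster the broker heap is normally controlled through broker_max_heap_size (see the previous answer) rather than this script.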
