
Storm-Kafka spout not creating node in zookeeper cluster

I am using Storm 0.10 and Kafka 0.9.0.0 with storm-kafka. Whenever I run my topology on the cluster it starts reading from the beginning, although I am setting zkRoot and the consumer groupId from a properties file:

kafka.zkHosts=myserver.myhost.com:2181
kafka.topic=onboarding-mail-topic
kafka.zkRoot=/kafka-storm
kafka.group.id=onboarding

Spout:

BrokerHosts zkHosts = new ZkHosts(prop.getProperty("kafka.zkHosts"));
String topicName = prop.getProperty("kafka.topic");
String zkRoot = prop.getProperty("kafka.zkRoot");
String groupId = prop.getProperty("kafka.group.id");

// kafka spout conf
SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, topicName, zkRoot, groupId);

kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);

When I check ZooKeeper with ls / it doesn't show kafka-storm:

[controller_epoch, controller, brokers, storm, zookeeper, kafka-manager, admin, isr_change_notification, consumers, config]

Finally, I figured it out. Reading from Kafka and writing offsets back to ZooKeeper are controlled separately.

If you are running your topology on a Storm cluster, whether single-node or multi-node, make sure you have set the following in your storm.yaml file:

storm.zookeeper.servers

and

storm.zookeeper.port

These come in addition to the zkHosts, zkRoot and consumer group id properties.
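For illustration, a minimal storm.yaml fragment setting both properties might look like this (the hostname is a placeholder; use your own ZooKeeper servers):

```yaml
# Cluster-level ZooKeeper settings that KafkaSpout falls back to
# when zkServers/zkPort are not set on the SpoutConfig
storm.zookeeper.servers:
  - "myserver.myhost.com"
storm.zookeeper.port: 2181
```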

A better practice would be to override these properties in your topology by setting the correct values when creating the KafkaSpout, like so:

        BrokerHosts zkHosts = new ZkHosts(prop.getProperty("kafka.zkHosts"));
        String topicName = prop.getProperty("kafka.topic");
        String zkRoot = prop.getProperty("kafka.zkRoot");
        String groupId = prop.getProperty("kafka.group.id");
        String kafkaServers = prop.getProperty("kafka.zkServers");
        String zkPort = prop.getProperty("kafka.zkPort");
        //kafka spout conf
        SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, topicName, zkRoot, groupId);

        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        kafkaConfig.zkServers = Arrays.asList(kafkaServers);
        kafkaConfig.zkPort = Integer.valueOf(zkPort);

        KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);

Or you can even put these values in the Config object. This is better, since you might want to store offset information in one ZooKeeper cluster while your topology reads messages from a completely different broker.
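As a minimal sketch of that alternative: Storm's Config is just a Map, so you can set the ZooKeeper coordinates on it before submitting the topology. The string keys below are the actual values behind Storm's Config.STORM_ZOOKEEPER_SERVERS and Config.STORM_ZOOKEEPER_PORT constants (real code would reference the Config class directly; the hostname is a placeholder):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class TopologyConfigDemo {
    // Build a topology conf with explicit ZooKeeper settings.
    // "storm.zookeeper.servers" and "storm.zookeeper.port" are the
    // literal values of Config.STORM_ZOOKEEPER_SERVERS / STORM_ZOOKEEPER_PORT.
    static Map<String, Object> buildConf() {
        Map<String, Object> conf = new HashMap<>();
        conf.put("storm.zookeeper.servers", Arrays.asList("myserver.myhost.com"));
        conf.put("storm.zookeeper.port", 2181);
        return conf;
    }

    public static void main(String[] args) {
        // In a real topology this map would be passed to
        // StormSubmitter.submitTopology(name, conf, topology)
        System.out.println(buildConf());
    }
}
```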

KafkaSpout code snippet, for reference:

@Override
public void open(Map conf, final TopologyContext context, final SpoutOutputCollector collector) {
    _collector = collector;

    Map stateConf = new HashMap(conf);
    List<String> zkServers = _spoutConfig.zkServers;
    if (zkServers == null) {
        zkServers = (List<String>) conf.get(Config.STORM_ZOOKEEPER_SERVERS);
    }
    Integer zkPort = _spoutConfig.zkPort;
    if (zkPort == null) {
        zkPort = ((Number) conf.get(Config.STORM_ZOOKEEPER_PORT)).intValue();
    }
    stateConf.put(Config.TRANSACTIONAL_ZOOKEEPER_SERVERS, zkServers);
    stateConf.put(Config.TRANSACTIONAL_ZOOKEEPER_PORT, zkPort);
    stateConf.put(Config.TRANSACTIONAL_ZOOKEEPER_ROOT, _spoutConfig.zkRoot);
    _state = new ZkState(stateConf);

    _connections = new DynamicPartitionConnections(_spoutConfig, KafkaUtils.makeBrokerReader(conf, _spoutConfig));

    // using TransactionalState like this is a hack
    int totalTasks = context.getComponentTasks(context.getThisComponentId()).size();
    if (_spoutConfig.hosts instanceof StaticHosts) {
        _coordinator = new StaticCoordinator(_connections, conf, _spoutConfig, _state, context.getThisTaskIndex(), totalTasks, _uuid);
    } else {
        _coordinator = new ZkCoordinator(_connections, conf, _spoutConfig, _state, context.getThisTaskIndex(), totalTasks, _uuid);
    }
}
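The key behavior in that snippet is the fallback: an explicit SpoutConfig value wins, otherwise the spout silently uses the cluster-level storm.yaml value from the topology conf. A self-contained sketch of that resolution logic (hostnames are placeholders, and the string key stands in for Config.STORM_ZOOKEEPER_SERVERS):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ZkFallbackDemo {
    // Mirrors the resolution order in KafkaSpout.open(): an explicit
    // SpoutConfig.zkServers value wins; otherwise fall back to the
    // cluster-level storm.zookeeper.servers carried in the topology conf.
    static List<String> resolveZkServers(List<String> spoutZkServers, Map<String, Object> conf) {
        if (spoutZkServers != null) {
            return spoutZkServers;
        }
        @SuppressWarnings("unchecked")
        List<String> clusterServers = (List<String>) conf.get("storm.zookeeper.servers");
        return clusterServers;
    }

    public static void main(String[] args) {
        Map<String, Object> conf = new HashMap<>();
        conf.put("storm.zookeeper.servers", Arrays.asList("cluster-zk.example.com"));

        // With an explicit spout setting, the cluster default is ignored.
        System.out.println(resolveZkServers(Arrays.asList("myserver.myhost.com"), conf));
        // Without one, the spout uses the cluster ZooKeeper instead --
        // which is why no /kafka-storm node appeared where I was looking.
        System.out.println(resolveZkServers(null, conf));
    }
}
```

This is why the offsets never showed up under /kafka-storm on the expected ZooKeeper: with zkServers unset, they were being written to the Storm cluster's own ZooKeeper.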
