
Storm-Kafka spout not creating node in zookeeper cluster

I am using storm-kafka with Storm 0.10 and Kafka 0.9.0.0. Whenever I run the topology on the cluster, it starts reading from the beginning, even though I set the zkRoot and consumer groupId from a properties file -

kafka.zkHosts=myserver.myhost.com:2181
kafka.topic=onboarding-mail-topic
kafka.zkRoot=/kafka-storm
kafka.group.id=onboarding

Spout:

    BrokerHosts zkHosts = new ZkHosts(prop.getProperty("kafka.zkHosts"));
    String topicName = prop.getProperty("kafka.topic");
    String zkRoot = prop.getProperty("kafka.zkRoot");
    String groupId = prop.getProperty("kafka.group.id");

    //kafka spout conf
    SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, topicName, zkRoot, groupId);

    kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);

When I check Zookeeper with ls / it does not show me kafka-storm:

[controller_epoch, controller, brokers, storm, zookeeper, kafka-manager, admin, isr_change_notification, consumers, config]

Finally, I figured it out. It is because reading from Kafka and writing the consumed offsets back are controlled separately: the spout fetches messages via the brokers registered under zkHosts, but by default it commits its offsets to the Storm cluster's own Zookeeper (the one configured in storm.yaml), not to the Zookeeper given in zkHosts.

If you are going to run the topology on a Storm cluster, whether single-node or multi-node, make sure the following are set in the storm.yaml file

storm.zookeeper.servers

storm.zookeeper.port

in addition to the zkHosts, zkRoot and consumer group id properties (see the sketch below).
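For reference, a minimal storm.yaml sketch; the host and port here simply reuse the values from the properties file above, so substitute your own Zookeeper ensemble:

    storm.zookeeper.servers:
      - "myserver.myhost.com"
    storm.zookeeper.port: 2181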

Or, better practice, override these properties in the topology by setting the correct values while creating the KafkaSpout, like this -

    BrokerHosts zkHosts = new ZkHosts(prop.getProperty("kafka.zkHosts"));
    String topicName = prop.getProperty("kafka.topic");
    String zkRoot = prop.getProperty("kafka.zkRoot");
    String groupId = prop.getProperty("kafka.group.id");
    String kafkaServers = prop.getProperty("kafka.zkServers");
    String zkPort = prop.getProperty("kafka.zkPort");

    //kafka spout conf
    SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, topicName, zkRoot, groupId);

    kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    // point the spout at the Zookeeper where it should store its offsets;
    // split in case several hosts are listed, e.g. "zk1,zk2"
    kafkaConfig.zkServers = Arrays.asList(kafkaServers.split(","));
    kafkaConfig.zkPort = Integer.valueOf(zkPort);

    KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);

You can even put these values in the Config object. That is even better, because you may want to store the offset information in some other Zookeeper cluster while your topology reads messages from an entirely different set of brokers.
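A minimal sketch of that idea (not the author's exact code): it assumes Storm 0.10 packages (backtype.storm.*), placeholder host names, and that your Storm version honours these keys when they are set in the topology Config -

    // hypothetical example; kafkaSpout is the spout built above
    Config conf = new Config();
    // the KafkaSpout falls back to these keys when SpoutConfig.zkServers / zkPort are left null
    conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("offset-zk.myhost.com"));
    conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-spout", kafkaSpout, 1);
    StormSubmitter.submitTopology("onboarding-topology", conf, builder.createTopology());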

A KafkaSpout code snippet (from storm-kafka) for your understanding -

    @Override
    public void open(Map conf, final TopologyContext context, final SpoutOutputCollector collector) {
        _collector = collector;

        Map stateConf = new HashMap(conf);
        // if zkServers/zkPort were not set on the SpoutConfig, fall back to the
        // Storm cluster's own Zookeeper (storm.zookeeper.servers / storm.zookeeper.port)
        List<String> zkServers = _spoutConfig.zkServers;
        if (zkServers == null) {
            zkServers = (List<String>) conf.get(Config.STORM_ZOOKEEPER_SERVERS);
        }
        Integer zkPort = _spoutConfig.zkPort;
        if (zkPort == null) {
            zkPort = ((Number) conf.get(Config.STORM_ZOOKEEPER_PORT)).intValue();
        }
        // this is the Zookeeper that the offsets (ZkState) are written to
        stateConf.put(Config.TRANSACTIONAL_ZOOKEEPER_SERVERS, zkServers);
        stateConf.put(Config.TRANSACTIONAL_ZOOKEEPER_PORT, zkPort);
        stateConf.put(Config.TRANSACTIONAL_ZOOKEEPER_ROOT, _spoutConfig.zkRoot);
        _state = new ZkState(stateConf);

        _connections = new DynamicPartitionConnections(_spoutConfig, KafkaUtils.makeBrokerReader(conf, _spoutConfig));

        // using TransactionalState like this is a hack
        int totalTasks = context.getComponentTasks(context.getThisComponentId()).size();
        if (_spoutConfig.hosts instanceof StaticHosts) {
            _coordinator = new StaticCoordinator(_connections, conf, _spoutConfig, _state, context.getThisTaskIndex(), totalTasks, _uuid);
        } else {
            _coordinator = new ZkCoordinator(_connections, conf, _spoutConfig, _state, context.getThisTaskIndex(), totalTasks, _uuid);
        }
        // ... rest of open() omitted
    }
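Once the spout is pointed at the right Zookeeper, the zkRoot node should show up there. Below is a quick, hypothetical check using Curator (storm-kafka already depends on it); the /kafka-storm/onboarding path assumes the <zkRoot>/<groupId>/partition_N layout storm-kafka uses for committed offsets, so verify it against your version:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.RetryOneTime;
    import org.apache.zookeeper.data.Stat;

    public class OffsetNodeCheck {
        public static void main(String[] args) throws Exception {
            // connect to the Zookeeper the spout is supposed to write offsets to (placeholder host)
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "myserver.myhost.com:2181", new RetryOneTime(1000));
            client.start();
            // storm-kafka commits offsets under <zkRoot>/<groupId>/partition_<n>
            Stat stat = client.checkExists().forPath("/kafka-storm/onboarding");
            System.out.println(stat == null
                    ? "no offset node yet - the spout is writing somewhere else"
                    : "offset nodes: " + client.getChildren().forPath("/kafka-storm/onboarding"));
            client.close();
        }
    }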
