
Why does httpcomponents slow down my topology after the first processing of tuples?

I built a Storm topology that receives tuples from Apache Kafka via a kafka-spout, writes this data (using another bolt) as a String to a .txt file on the local system, and then sends an HTTP POST from a PostBolt.

Both bolts are connected to the Kafka spout.

If I test the topology without the PostBolt, everything works fine. But if I add the bolt to the topology, the whole topology gets blocked for some reason.

Has anyone run into the same problem, or does anyone have a hint as to what is causing this?

I have read that there are some issues with CloseableHttpClient or CloseableHttpResponse blocking threads from working... could that be the same problem in this case?


My PostBolt code:

public class PostBolt extends BaseRichBolt {

private CloseableHttpClient httpclient; 

@Override
public final void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    //empty for now
}

@Override
public final void execute(Tuple tuple) {

    //create HttpClient:
    httpclient = HttpClients.createDefault();
    String url = "http://xxx.xxx.xx.xxx:8080/HTTPServlet/httpservlet";
    HttpPost post = new HttpPost(url);

    post.setHeader("str1", "TEST TEST TEST");

    try {
        CloseableHttpResponse postResponse;
        postResponse = httpclient.execute(post);
        System.out.println(postResponse.getStatusLine());
        System.out.println("=====sending POST=====");
        HttpEntity postEntity = postResponse.getEntity();
        //do something useful with the response body
        //and ensure that it is fully consumed
        EntityUtils.consume(postEntity);
        postResponse.close();
    }catch (Exception e){
         e.printStackTrace();
    }
}

@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("HttpPost"));
}
}

My topology code:

public static void main(String[] args) throws Exception {

    /**
    *   create a config for Kafka-Spout (and Kafka-Bolt)
    */
    Config config = new Config();
    config.setDebug(true);
    config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1);
    //setup zookeeper connection
    String zkConnString = "localhost:2181";
    //define Kafka topic for the spout
    String topic = "mytopic";
    //assign the zookeeper connection to brokerhosts
    BrokerHosts hosts = new ZkHosts(zkConnString);

    //setting up spout properties
    SpoutConfig kafkaSpoutConfig = new SpoutConfig(hosts, topic, "/" +topic, UUID.randomUUID().toString());
    kafkaSpoutConfig.bufferSizeBytes = 1024 * 1024 * 4;
    kafkaSpoutConfig.fetchSizeBytes = 1024 * 1024 * 4;
    kafkaSpoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    /**
    *   Build the Topology by linking the spout and bolts together
    */
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-spout", new KafkaSpout(kafkaSpoutConfig));
    builder.setBolt("printer-bolt", new PrinterBolt()).shuffleGrouping("kafka-spout");
    builder.setBolt("post-bolt", new PostBolt()).shuffleGrouping("kafka-spout");

    /**
    *   Check if we're running locally or on a real cluster
    */
    if (args != null && args.length >0) {
        config.setNumWorkers(6);
        config.setNumAckers(6);
        config.setMaxSpoutPending(100);
        config.setMessageTimeoutSecs(20);
        StormSubmitter.submitTopology("StormKafkaTopology", config, builder.createTopology());
    } else {
        config.setMaxTaskParallelism(3);
        config.setNumWorkers(6);
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("StormKafkaTopology", config, builder.createTopology());
        //Utils.sleep(100000);
        //cluster.killTopology("StormKafkaTopology");
        //cluster.shutdown();
    }
}
}

It seems to me that you have already answered your own question, but... according to this answer, you should use a PoolingHttpClientConnectionManager, because you will be running in a multithreaded environment.
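For illustration, here is a minimal sketch (not part of the original answer) of how such a pooled client could be created once in prepare(); the pool sizes are arbitrary assumptions:

//PoolingHttpClientConnectionManager lives in org.apache.http.impl.conn (HttpClient 4.x)
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(20);           //upper bound on connections across all routes (assumed value)
cm.setDefaultMaxPerRoute(10); //upper bound per target host (assumed value)
httpclient = HttpClients.custom()
        .setConnectionManager(cm)
        .build();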

Edit:

//imports assume HttpClient 4.x and a Storm 1.x (org.apache.storm) classpath
import java.io.IOException;
import java.util.Map;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PostBolt extends BaseRichBolt {
    private static final Logger LOG = LoggerFactory.getLogger(PostBolt.class);
    private CloseableHttpClient httpclient;
    private OutputCollector _collector;

    @Override
    public final void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        //create the client once per bolt instance instead of once per tuple
        httpclient = HttpClients.createDefault();
        _collector = collector;
    }

    @Override
    public final void execute(Tuple tuple) {
        String url = "http://xxx.xxx.xx.xxx:8080/HTTPServlet/httpservlet";
        HttpPost post = new HttpPost(url);
        post.setHeader("str1", "TEST TEST TEST");

        CloseableHttpResponse postResponse = null;
        try {
            //execute(post) can throw IOException, so it belongs inside the try block
            postResponse = httpclient.execute(post);
            LOG.info(postResponse.getStatusLine().toString());
            LOG.info("=====sending POST=====");
            HttpEntity postEntity = postResponse.getEntity();
            //do something useful with the response body
            //and ensure that it is fully consumed
            EntityUtils.consume(postEntity);
        } catch (Exception e) {
            LOG.error("PostBolt execute error", e);
            _collector.reportError(e);
        } finally {
            //close the response exactly once, even if execute(post) failed
            if (postResponse != null) {
                try {
                    postResponse.close();
                } catch (IOException e) {
                    LOG.warn("could not close response", e);
                }
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("HttpPost"));
    }
}

Well, I tracked down the problem thanks to this comment: https://stackoverflow.com/a/32080845/7208987

The Kafka spout keeps resending tuples that are not acknowledged by the "endpoint" they were sent to. Note that the topology above sets TOPOLOGY_MAX_SPOUT_PENDING to 1, so a single unacked tuple is already enough to stall the spout.

So all I had to do was ack the incoming tuples within the bolts, and the jamming of the topology was gone.

(I found the culprit because the printer-bolt kept writing even though there was no further input from the kafka-spout.)
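For reference, a minimal sketch of that fix (hypothetical, layered on the edited PostBolt above): ack each incoming tuple once it has been processed, so the spout stops replaying it.

@Override
public final void execute(Tuple tuple) {
    try {
        //... build and send the HTTP POST as shown above ...
    } finally {
        //ack the tuple so the Kafka spout does not resend it
        _collector.ack(tuple);
    }
}

Alternatively, extending BaseBasicBolt instead of BaseRichBolt lets Storm ack each tuple automatically after execute() returns.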
