
Add retry mechanism for bolt on Apache Storm


I have a bolt (dispatcher) in my Storm topology that opens an HTTP request connection.

I want to add a retry mechanism for failures (connection timeout, failure status, etc.). The retry should happen only in the dispatcher bolt, not by replaying from the start of the whole topology.

Normally what I would do is add a queue that is responsible for the retries and the exception handling (e.g. after 3 attempts the message is automatically dispatched to an error queue).

Is it OK to do something like that inside a bolt? Does anyone have experience with this and can suggest a library I could use?

Sure! That seems like a reasonable way to handle errors. I'm not sure what library you would need, other than one that provides an API for connecting to the queueing system of your choice.

Inside your bolt, you might have code that looks something like this:

@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
   try {
      // do something which might fail here...
   } catch (Exception e) {
      // do you want to log the error?
      LOG.error("Bolt error", e);
      // do you want the error to show up in storm UI?
      collector.reportError(e);
      // or just put information on the queue for processing later
   }
}

As long as you catch the exception inside the bolt, the topology will not restart.
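
If you want the bounded-retry behaviour you described (give up after 3 attempts and hand the message off to an error queue), a minimal sketch of doing that inside the bolt could look like the code below. The MAX_ATTEMPTS constant, the "request" field name, and the sendHttpRequest/sendToErrorQueue helpers are hypothetical placeholders for your own HTTP call and for whatever client your queueing system provides.

@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
   // "request" is an assumed field name; adapt it to your tuple schema
   String request = tuple.getStringByField("request");
   final int MAX_ATTEMPTS = 3;
   for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
         // hypothetical helper that opens the HTTP connection and may
         // throw on timeout or on a failure status code
         sendHttpRequest(request);
         return; // success, nothing more to do
      } catch (Exception e) {
         LOG.warn("Attempt " + attempt + "/" + MAX_ATTEMPTS + " failed", e);
         if (attempt == MAX_ATTEMPTS) {
            // give up: surface the error in the Storm UI and move the
            // message to an error queue (hypothetical helper)
            collector.reportError(e);
            sendToErrorQueue(request, e);
         }
      }
   }
}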

Another option is to take advantage of Storm's built-in support for guaranteed message processing and have the tuple failed and re-emitted that way. For example, the following spout keeps a per-message failure count and re-sends a failed message until its failure count reaches MAX_FAILS, at which point it gives up (here by throwing):

package banktransactions;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.apache.log4j.Logger;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class TransactionsSpouts extends BaseRichSpout {

private static final Integer MAX_FAILS = 2;
Map<Integer,String> messages;
Map<Integer,Integer> transactionFailureCount;
Map<Integer,String> toSend;
private SpoutOutputCollector collector;  

static Logger LOG = Logger.getLogger(TransactionsSpouts.class);


public void ack(Object msgId) {
    messages.remove(msgId);
    LOG.info("Message fully processed ["+msgId+"]");
}

public void close() {

}

public void fail(Object msgId) {
    if(!transactionFailureCount.containsKey(msgId))
        throw new RuntimeException("Error, transaction id not found ["+msgId+"]");
    Integer transactionId = (Integer) msgId;

    // Get the transaction's failure count and add this failure
    Integer failures = transactionFailureCount.get(transactionId) + 1;
    if(failures >= MAX_FAILS){
        // If it exceeds the max fails, bring the topology down
        throw new RuntimeException("Error, transaction id ["+transactionId+"] has had many errors ["+failures+"]");
    }
    // Otherwise, save the new failure count and re-send the message
    transactionFailureCount.put(transactionId, failures);
    toSend.put(transactionId,messages.get(transactionId));
    LOG.info("Re-sending message ["+msgId+"]");
}

public void nextTuple() {
    if(!toSend.isEmpty()){
        for(Map.Entry<Integer, String> transactionEntry : toSend.entrySet()){
            Integer transactionId = transactionEntry.getKey();
            String transactionMessage = transactionEntry.getValue();
            collector.emit(new Values(transactionMessage),transactionId);
        }
        /*
         * The nextTuple, ack and fail methods run in the same loop, so
         * we can consider the clear method atomic
         */
        toSend.clear();
    }
    try {
        Thread.sleep(1);
    } catch (InterruptedException e) {}
}

public void open(Map conf, TopologyContext context,
        SpoutOutputCollector collector) {
    Random random = new Random();
    messages = new HashMap<Integer, String>();
    toSend = new HashMap<Integer, String>();
    transactionFailureCount = new HashMap<Integer, Integer>();
    for(int i = 0; i< 100; i++){
        messages.put(i, "transaction_"+random.nextInt());
        transactionFailureCount.put(i, 0);
    }
    toSend.putAll(messages);
    this.collector = collector;
}

public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("transactionMessage"));
}

}
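
For that to work end-to-end, the bolt that does the HTTP dispatch also has to take part in Storm's reliability mechanism: anchor anything it emits to the input tuple, ack on success, and fail on error so that the spout's fail() above gets called. A minimal sketch under those assumptions (using the same old backtype.storm package names as the spout; doHttpCall is a hypothetical placeholder for your dispatcher logic):

package banktransactions;

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class DispatcherBolt extends BaseRichBolt {

    private OutputCollector collector;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(Tuple input) {
        String message = input.getStringByField("transactionMessage");
        try {
            doHttpCall(message);                         // hypothetical HTTP dispatch
            collector.emit(input, new Values(message));  // anchor to the input tuple
            collector.ack(input);                        // triggers the spout's ack()
        } catch (Exception e) {
            collector.fail(input);                       // triggers the spout's fail() -> re-send
        }
    }

    private void doHttpCall(String message) throws Exception {
        // hypothetical: open the HTTP request connection here
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("dispatchedMessage"));
    }
}

Note that failing the tuple only triggers a replay because the spout above emits with a message id (the second argument to collector.emit in nextTuple); without a message id Storm has nothing to ack or fail.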

