
Add retry mechanism for bolt on Apache Storm

I have a bolt (a dispatcher) in my Storm topology that opens HTTP request connections.

I want to add a retry mechanism for failures (connection timeouts, failure status codes, etc.). The retries should happen only in the dispatcher bolt, not by starting over from the beginning of the topology.

Normally what I would do is add a queue that is responsible for the retries and the exception handling (e.g. after 3 attempts the message is automatically dispatched to an error queue).

Is it OK to do something like this inside a bolt? Does anyone have experience with this and can suggest which library I could use?

Sure! This seems like a reasonable approach to handling errors. I'm not sure what library you would need, other than one that provides an API for connecting to the queuing system of your choice.

Inside your bolt you might have code like the following:

@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
   try {
      // do something which might fail here...
   } catch (Exception e) {
      // do you want to log the error?
      LOG.error("Bolt error {}", e);
      // do you want the error to show up in storm UI?
      collector.reportError(e);
      // or just put information on the queue for processing later
   }
}

As long as you catch the exception inside the bolt, the topology will not be restarted.
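If you want the bounded retry you describe (give up after a few attempts and hand the message to an error queue), a minimal sketch could look like the one below. The callHttp() helper, the MAX_RETRIES constant and the "errors" output stream are assumptions made for illustration; swap them for your actual HTTP call and your queuing system.

import org.apache.log4j.Logger;

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class DispatcherBolt extends BaseBasicBolt {

    // assumption: give up after 3 attempts, as in the question
    private static final int MAX_RETRIES = 3;
    private static final Logger LOG = Logger.getLogger(DispatcherBolt.class);

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String request = tuple.getString(0);
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                callHttp(request);   // hypothetical HTTP call that may time out
                return;              // success, nothing more to do
            } catch (Exception e) {
                LOG.warn("Attempt " + attempt + " failed for [" + request + "]", e);
            }
        }
        // All attempts failed: report the error and route the message to an error stream
        collector.reportError(new RuntimeException("Giving up after " + MAX_RETRIES + " attempts"));
        collector.emit("errors", new Values(request));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream("errors", new Fields("failedRequest"));
    }

    private void callHttp(String request) throws Exception {
        // placeholder for the actual HTTP request logic
    }
}

A bolt (or a small client for your queuing system) subscribed to the "errors" stream can then move those messages to an error queue.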

Another option is to take advantage of Storm's built-in guaranteed message processing to fail the tuple and have it retried that way. For example, a spout can count failures in its fail() method and re-emit the message until a maximum number of retries is reached:

package banktransactions;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.apache.log4j.Logger;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class TransactionsSpouts extends BaseRichSpout{

private static final Integer MAX_FAILS = 2;
Map<Integer,String> messages;
Map<Integer,Integer> transactionFailureCount;
Map<Integer,String> toSend;
private SpoutOutputCollector collector;  

static Logger LOG = Logger.getLogger(TransactionsSpouts.class);


public void ack(Object msgId) {
    messages.remove(msgId);
    LOG.info("Message fully processed ["+msgId+"]");
}

public void close() {

}

public void fail(Object msgId) {
    if(!transactionFailureCount.containsKey(msgId))
        throw new RuntimeException("Error, transaction id not found ["+msgId+"]");
    Integer transactionId = (Integer) msgId;

    //Get the transaction's failure count
    Integer failures = transactionFailureCount.get(transactionId) + 1;
    if(failures >= MAX_FAILS){
        //If it exceeds the max fails, bring the topology down
        throw new RuntimeException("Error, transaction id ["+transactionId+"] has had many errors ["+failures+"]");
    }
    //Otherwise, save the new failure count and re-send the message
    transactionFailureCount.put(transactionId, failures);
    toSend.put(transactionId,messages.get(transactionId));
    LOG.info("Re-sending message ["+msgId+"]");
}

public void nextTuple() {
    if(!toSend.isEmpty()){
        for(Map.Entry<Integer, String> transactionEntry : toSend.entrySet()){
            Integer transactionId = transactionEntry.getKey();
            String transactionMessage = transactionEntry.getValue();
            collector.emit(new Values(transactionMessage),transactionId);
        }
        /*
         * The nextTuple, ack and fail methods run in the same loop, so
         * we can consider the call to clear() atomic
         */
        toSend.clear();
    }
    try {
        Thread.sleep(1);
    } catch (InterruptedException e) {}
}

public void open(Map conf, TopologyContext context,
        SpoutOutputCollector collector) {
    Random random = new Random();
    messages = new HashMap<Integer, String>();
    toSend = new HashMap<Integer, String>();
    transactionFailureCount = new HashMap<Integer, Integer>();
    for(int i = 0; i< 100; i++){
        messages.put(i, "transaction_"+random.nextInt());
        transactionFailureCount.put(i, 0);
    }
    toSend.putAll(messages);
    this.collector = collector;
}

public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("transactionMessage"));
}

}
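
For completeness, a bolt working with this spout only needs to ack or fail the incoming tuple; on failure Storm calls the spout's fail() method above, which re-emits the message until MAX_FAILS is reached. The sketch below assumes a hypothetical process() helper standing in for the real work (e.g. the HTTP call):

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class TransactionsBolt extends BaseRichBolt {

    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String message = tuple.getStringByField("transactionMessage");
        try {
            process(message);        // hypothetical processing that may fail
            collector.ack(tuple);    // success: the spout's ack() removes the message
        } catch (Exception e) {
            collector.fail(tuple);   // failure: the spout's fail() re-sends the message
        }
    }

    private void process(String message) throws Exception {
        // placeholder for the actual work (e.g. the HTTP call)
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no downstream output in this sketch
    }
}

Note that this only works because the spout emits each tuple with a message id (collector.emit(new Values(transactionMessage), transactionId)); without a message id, Storm never calls ack() or fail().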
