
File Operations using Akka Actor

What is the advantage of using Akka Actors over regular file operations? I tried to measure the time taken to analyse a log file. The task is to find the IP addresses that have logged in more than 50 times and print them. Plain file operations turned out to be faster than the Akka Actor model. Why is that?

Using plain file operations:

public static void main(String[] args) {
        // TODO Auto-generated method stub
        //long startTime = System.currentTimeMillis();
        File file = new File("log.txt");
        Map<String, Long> ipMap = new HashMap<>();

        try {

                FileReader fr = new FileReader(file);
                BufferedReader br = new BufferedReader(fr);
                String line = br.readLine();

                while(line!=null) {
                    int idx = line.indexOf('-');
                    String ipAddress = line.substring(0, idx).trim();
                    long count = ipMap.getOrDefault(ipAddress, 0L);
                    ipMap.put(ipAddress, ++count);
                    line = br.readLine();
                }

                 System.out.println("================================");
                 System.out.println("||\tCount\t||\t\tIP");
                 System.out.println("================================");

                 br.close(); // closing the BufferedReader also closes the underlying FileReader
                 // LinkedHashMap so the sorted insertion order below is preserved
                 Map<String, Long> result = new LinkedHashMap<>();

                    // Sort by value and put it into the "result" map
                    ipMap.entrySet().stream()
                            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                            .forEachOrdered(x -> result.put(x.getKey(), x.getValue()));

                    // Print only if count > 50
                    result.entrySet().stream().filter(entry -> entry.getValue() > 50).forEach(entry ->
                        System.out.println("||\t" + entry.getValue() + "   \t||\t" + entry.getKey())
                    );

//                  long endTime = System.currentTimeMillis();
//                  System.out.println("Time: "+(endTime-startTime));

            } catch (FileNotFoundException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }

    }

Using Actors:
1. The Main Class
 public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        // Create actorSystem
        ActorSystem akkaSystem = ActorSystem.create("akkaSystem");

        // Create first actor based on the specified class
        ActorRef coordinator = akkaSystem.actorOf(Props.create(FileAnalysisActor.class));

        // Create a message including the file path
        FileAnalysisMessage msg = new FileAnalysisMessage("log.txt");

        // Send a message to start processing the file. This is a synchronous call using 'ask' with a timeout.
        Timeout timeout = new Timeout(6, TimeUnit.SECONDS);
        Future<Object> future = Patterns.ask(coordinator, msg, timeout);

        // Process the results
        final ExecutionContext ec = akkaSystem.dispatcher();
        future.onSuccess(new OnSuccess<Object>() {
            @Override
            public void onSuccess(Object message) throws Throwable {
                if (message instanceof FileProcessedMessage) {
                    printResults((FileProcessedMessage) message);

                    // Stop the actor system
                    akkaSystem.shutdown();
                }
            }

            private void printResults(FileProcessedMessage message) {
                System.out.println("================================");
                System.out.println("||\tCount\t||\t\tIP");
                System.out.println("================================");

                Map<String, Long> result = new LinkedHashMap<>();

                // Sort by value and put it into the "result" map
                message.getData().entrySet().stream()
                        .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                        .forEachOrdered(x -> result.put(x.getKey(), x.getValue())); 

                // Print only if count > 50
                result.entrySet().stream().filter(entry -> entry.getValue() > 50).forEach(entry ->
                    System.out.println("||\t" + entry.getValue() + "   \t||\t" + entry.getKey())
                );
                long endTime = System.currentTimeMillis();
                System.out.println("Total time: "+(endTime - startTime));
            }

        }, ec);

    }

2. The File Analysis Actor Class

public class FileAnalysisActor extends UntypedActor {

    private Map<String, Long> ipMap = new HashMap<>();
    private long fileLineCount;
    private long processedCount;
    private ActorRef analyticsSender = null;

    @Override
    public void onReceive(Object message) throws Exception {
        /*
            This actor can receive two different messages, FileAnalysisMessage or LineProcessingResult, any
            other type will be discarded using the unhandled method
         */
            //System.out.println(Thread.currentThread().getName());
        if (message instanceof FileAnalysisMessage) {

            List<String> lines = FileUtils.readLines(new File(
                    ((FileAnalysisMessage) message).getFileName()));

            fileLineCount = lines.size();
            processedCount = 0;

            // stores a reference to the original sender to send back the results later on
            analyticsSender = this.getSender();

            for (String line : lines) {
                // creates a new actor per each line of the log file
                Props props = Props.create(LogLineProcessor.class);
                ActorRef lineProcessorActor = this.getContext().actorOf(props);

                // sends a message to the new actor with the line payload
                lineProcessorActor.tell(new LogLineMessage(line), this.getSelf());
            }

        } else if (message instanceof LineProcessingResult) {

            // a result message is received after a LogLineProcessor actor has finished processing a line
            String ip = ((LineProcessingResult) message).getIpAddress();

            // increment ip counter
            Long count = ipMap.getOrDefault(ip, 0L);
            ipMap.put(ip, ++count);

            // if the file has been processed entirely, send a termination message to the main actor
            processedCount++;
            if (fileLineCount == processedCount) {
                // send done message
                analyticsSender.tell(new FileProcessedMessage(ipMap), ActorRef.noSender());
            }

        } else {
            // Ignore message
            this.unhandled(message);
        }
    }
}

3. The LogLineProcessor Class

public class LogLineProcessor extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof LogLineMessage) {
            // What data each actor process?
            //System.out.println("Line: " + ((LogLineMessage) message).getData());
            // Uncomment this line to see the thread number and the actor name relationship
           //System.out.println("Thread ["+Thread.currentThread().getId()+"] handling ["+ getSelf().toString()+"]");

            // get the message payload, this will be just one line from the log file
            String messageData = ((LogLineMessage) message).getData();

            int idx = messageData.indexOf('-');
            if (idx != -1) {
                // get the ip address
                String ipAddress = messageData.substring(0, idx).trim();

                // tell the sender that we got a result using a new type of message
                this.getSender().tell(new LineProcessingResult(ipAddress), this.getSelf());
            }
        } else {
            // ignore any other message type
            this.unhandled(message);
        }
    }
}

The Message Classes

1. FileAnalysisMessage

public class FileAnalysisMessage {

    private String fileName;

    public FileAnalysisMessage(String file) {
        this.fileName = file;
    }

    public String getFileName() {
        return fileName;
    }
}

2. FileProcessedMessage

public class FileProcessedMessage {

    private Map<String, Long> data;

    public FileProcessedMessage(Map<String, Long> data) {
        this.data = data;
    }

    public Map<String, Long> getData() {
        return data;
    }
}
3. LineProcessingResult

public class LineProcessingResult {

    private String ipAddress;

    public LineProcessingResult(String ipAddress) {
        this.ipAddress = ipAddress;
    }

    public String getIpAddress() {
        return ipAddress;
    }
}

4. LogLineMessage

public class LogLineMessage {

    private String data;

    public LogLineMessage(String data) {
        this.data = data;
    }

    public String getData() {
        return data;
    }
}

I am creating one actor for each line of the file.

With any concurrency framework there is always a trade-off between the amount of concurrency deployed and the complexity involved in each unit of concurrency. Akka is no exception.

In the non-akka approach, each line goes through a relatively simple sequence of steps:

  1. Read a line from the file
  2. Split the line on "-"
  3. Put the IP address into a hashmap and increment its count

By comparison, the akka approach for each line is much more involved:

  1. Create an Actor
  2. Create a LogLineMessage
  3. Send the message to the Actor
  4. Split the line on "-"
  5. Create a LineProcessingResult message
  6. Send the message back to the coordinating Actor
  7. Put the IP address into a hashmap and increment its count

If we naively assume each of the above steps takes the same amount of time, that is seven steps with akka versus three without, so you would need roughly 2 threads with akka just to run at the same speed as 1 thread without it.

Have each unit of concurrency do more work

Instead of 1 Actor per 1 line, have each Actor process N lines into its own sub-hashmap (e.g. each Actor handles 1000 lines):

public class LogLineMessage {

    private String[] data;

    public LogLineMessage(String[] data) {
        this.data = data;
    }

    public String[] getData() {
        return data;
    }
}

That way each Actor isn't sending back something as simple as a single IP address. Instead, it sends a hashmap of counts for its subset of lines:

public class LineProcessingResult {

    private HashMap<String, Long> ipAddressCount;

    public LineProcessingResult(HashMap<String, Long> count) {
        this.ipAddressCount = count;
    }

    public HashMap<String, Long> getIpAddressCount() {
        return ipAddressCount;
    }
}
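The per-chunk counting and the coordinator-side merge can be exercised without any Akka machinery. Here is a minimal plain-Java sketch of both halves (the class and method names are illustrative, not part of the original answer; `countChunk` plays the role of the batched LogLineProcessor and `merge` plays the role of the coordinating FileAnalysisActor):

```java
import java.util.HashMap;
import java.util.Map;

public class ChunkCountDemo {

    // What each batched line-processor actor would compute for its chunk of lines.
    static Map<String, Long> countChunk(String[] lines) {
        Map<String, Long> localCount = new HashMap<>();
        for (String line : lines) {
            int idx = line.indexOf('-');
            if (idx != -1) {
                String ip = line.substring(0, idx).trim();
                localCount.put(ip, localCount.getOrDefault(ip, 0L) + 1);
            }
        }
        return localCount;
    }

    // What the coordinating actor does with each LineProcessingResult it receives.
    static void merge(Map<String, Long> ipMap, Map<String, Long> localCount) {
        localCount.forEach((ip, count) ->
            ipMap.put(ip, ipMap.getOrDefault(ip, 0L) + count));
    }

    public static void main(String[] args) {
        Map<String, Long> ipMap = new HashMap<>();
        merge(ipMap, countChunk(new String[]{"1.2.3.4 - GET /a", "1.2.3.4 - GET /b"}));
        merge(ipMap, countChunk(new String[]{"1.2.3.4 - GET /c", "5.6.7.8 - GET /d"}));
        System.out.println(ipMap.get("1.2.3.4") + " " + ipMap.get("5.6.7.8"));
        // prints "3 1"
    }
}
```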

And the coordinating Actor can be responsible for combining all of the sub-counts:

// inside of FileAnalysisActor
else if (message instanceof LineProcessingResult) {
    HashMap<String, Long> localCount = ((LineProcessingResult) message).getIpAddressCount();

    localCount.forEach((ipAddress, count) ->
        ipMap.put(ipAddress, ipMap.getOrDefault(ipAddress, 0L) + count)
    );
}

You can then vary N to see where you get the best performance for your particular system.
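A simple way to run that sweep is to time the whole pipeline for a few candidate values of N. The harness below is hypothetical (not from the original answer): the `Runnable` is a placeholder where you would plug in the actual actor run for a given batch size.

```java
public class BatchSizeSweep {

    // Times one full run of the workload in milliseconds.
    static long timeRun(Runnable processFile) {
        long start = System.currentTimeMillis();
        processFile.run();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 1000, 10000}) {
            final int batchSize = n;
            long elapsed = timeRun(() -> {
                // Placeholder workload: replace with the actor pipeline
                // configured to use batchSize lines per LogLineMessage.
                try { Thread.sleep(1); } catch (InterruptedException ignored) {}
            });
            System.out.println("N=" + batchSize + " took " + elapsed + " ms");
        }
    }
}
```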

Don't read the whole file into memory

Another drawback of the concurrent solution is that it first reads the entire file into memory. This is unnecessary and puts a burden on the JVM.

Instead, read the file N lines at a time. Once you have those lines in memory, spawn an Actor as described earlier:

FileReader fr = new FileReader(file);
BufferedReader br = new BufferedReader(fr);

final int N = 1000;
String[] lineBuffer = new String[N];
int bufferCount = 0;

String line = br.readLine();

while (line != null) {
    lineBuffer[bufferCount] = line;
    bufferCount++;

    if (N == bufferCount) {
        Props props = Props.create(LogLineProcessor.class);
        ActorRef lineProcessorActor = this.getContext().actorOf(props);

        lineProcessorActor.tell(new LogLineMessage(lineBuffer),
                                this.getSelf());

        // start a fresh buffer; the old one now belongs to the message
        lineBuffer = new String[N];
        bufferCount = 0;
    }

    line = br.readLine();
}

// handle the final, partially filled buffer
if (bufferCount > 0) {
    Props props = Props.create(LogLineProcessor.class);
    ActorRef lineProcessorActor = this.getContext().actorOf(props);

    // copy so the actor doesn't see trailing nulls
    lineProcessorActor.tell(new LogLineMessage(Arrays.copyOf(lineBuffer, bufferCount)),
                            this.getSelf());
}

br.close();

This allows the file IO, the line processing, and the sub-map combining all to run in parallel.

