
Reading lots of files using a producer causes CPU usage to be 100%

I wrote a simple producer-consumer pattern to do the following:

  1. Read files from a directory containing roughly 500,000 TSV (tab-separated) files.
  2. Parse each file into a data structure and put it on a blocking queue.
  3. Consume the queue with consumer threads that query a database.
  4. Compare the two hash maps and, if there are any differences, print them to a file.

When I run the program, my CPU consumption spikes to 100% even with just 5 threads. Could it be because I use a single producer to read the files?

File example (tab-separated):

Column1   Column2   Column3   Column 4   Column5
A         1         *         -          -
B         1         *         -          -
C         1         %         -          -

Producer

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;

public class Producer implements Runnable
{
    private final BlockingQueue<Map<String, Map<String, String>>> m_Queue;
    private final String m_Directory;

    public Producer(BlockingQueue<Map<String, Map<String, String>>> i_Queue, String i_Directory)
    {
        m_Queue = i_Queue;
        m_Directory = i_Directory;
    }

    @Override
    public void run()
    {
        if (Files.exists(Paths.get(m_Directory)))
        {
            File[] files = new File(m_Directory).listFiles();

            if (files != null)
            {
                for (File file : files)
                {
                    Map<String, String> map = new HashMap<>();
                    try (BufferedReader reader = new BufferedReader(new FileReader(file)))
                    {
                        String line, lastColumn3 = "", column1 = "", column2 = "", column3 = "";
                        while ((line = reader.readLine()) != null)
                        {
                            // Skip the column header row
                            if (!Character.isLetter(line.charAt(0)))
                            {
                                String[] splitLine = line.split("\t");

                                column1 = splitLine[0].replace("\"", "");
                                column2 = splitLine[1].replace("\"", "");
                                column3 = splitLine[2].replace("\"", "");

                                if (!lastColumn3.equals(column3))
                                {
                                    map.put(column3, column1);
                                    lastColumn3 = column3;
                                }
                            }
                        }

                        map.put(column3, column1);

                        // Column1 is always the same per file; Column2 is used as the key,
                        // and the Column3 -> Column1 map is stored as the value.
                        Map<String, Map<String, String>> mapPerFile = new HashMap<>();
                        mapPerFile.put(column2, map);

                        m_Queue.put(mapPerFile);
                    }
                    catch (IOException | InterruptedException e)
                    {
                        System.out.println(file);
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}

Consumer

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;

import com.google.common.collect.MapDifference;
import com.google.common.collect.Maps;

public class Consumer implements Runnable
{
    private HashMap<String, String> m_DBResults;
    private final BlockingQueue<Map<String, Map<String, String>>> m_Queue;
    private Map<String, Map<String, String>> m_DBResultsPerFile;
    private String m_Column1;
    private final int m_ThreadID;

    public Consumer(BlockingQueue<Map<String, Map<String, String>>> i_Queue, int i_ThreadID)
    {
        m_Queue = i_Queue;
        m_ThreadID = i_ThreadID;
    }

    @Override
    public void run()
    {
        try
        {
            while ((m_DBResultsPerFile = m_Queue.poll()) != null)
            {
                // Column1 is always the same; only the first entry is needed.
                m_Column1 = m_DBResultsPerFile.keySet().toArray()[0].toString();

                // Queries the DB and puts the returned data into m_DBResults.
                queryDB(m_Column1);

                // Write the difference, if any, per thread into a file.
                writeDifference();
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    private void writeDifference()
    {
        MapDifference<String, String> difference = Maps.difference(m_DBResultsPerFile.get(m_Column1), m_DBResults);

        if (difference.entriesOnlyOnLeft().size() > 0 || difference.entriesOnlyOnRight().size() > 0)
        {
            try (BufferedWriter writer = new BufferedWriter(new FileWriter(String.format("thread_%d.tsv", m_ThreadID), true)))
            {
                if (difference.entriesOnlyOnLeft().size() > 0)
                {
                    writer.write(String.format("%s\t%s\t", "Missing", m_Column1));
                    for (Map.Entry<String, String> entry : difference.entriesOnlyOnLeft().entrySet())
                    {
                        writer.write(String.format("[%s,%s]; ", entry.getKey(), entry.getValue()));
                    }

                    writer.write("\n");
                }
                if (difference.entriesOnlyOnRight().size() > 0)
                {
                    writer.write(String.format("%s\t%s\t", "Extra", m_Column1));
                    for (Map.Entry<String, String> entry : difference.entriesOnlyOnRight().entrySet())
                    {
                        writer.write(String.format("[%s,%s]; ", entry.getKey(), entry.getValue()));
                    }

                    writer.write("\n");
                }
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }
}

Main

public static void main(String[] args) {
    BlockingQueue<Map<String, Map<String, String>>> queue = new LinkedBlockingQueue<>();
    // Pool declaration was missing from the snippet: one producer plus ten consumers.
    ExecutorService threadPool = Executors.newFixedThreadPool(11);

    //Start the reader thread.
    threadPool.execute(new Producer(queue, args[0]));

    //Create configurable threads.
    for (int i = 0; i < 10; i++) {
        threadPool.execute(new Consumer(queue, i + 1));
    }

    threadPool.shutdown();
    System.out.println("INFO: Shutting down threads.");

    try {
        threadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
        System.out.println("INFO: Threadpool terminated successfully.");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

Your CPU usage is most likely caused by this line:

while ((m_DBResultsPerFile = m_Queue.poll()) != null)

The poll method does not block; it returns immediately. As a result, you are executing that loop millions of times per second.
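If you still want the loop to be able to exit once the producer is done, a middle ground is the timed poll(timeout, unit) overload, which blocks for up to the timeout instead of spinning. A minimal sketch (the class and method names here are my own, not from the question):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollDemo {
    // Drain a queue without busy-spinning: the timed poll blocks for up to
    // the given timeout, so an empty queue costs one wakeup per second
    // instead of millions of no-op iterations.
    static int drain(BlockingQueue<String> queue) throws InterruptedException {
        int count = 0;
        // Exit once the queue has stayed empty for a full timeout period.
        while (queue.poll(1, TimeUnit.SECONDS) != null) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("a");
        queue.put("b");
        System.out.println(drain(queue)); // prints 2
    }
}
```

The tradeoff is a one-timeout delay before each consumer notices the work is finished.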

You should use take(), which actually waits until an element is available:

while ((m_DBResultsPerFile = m_Queue.take()) != null)

The BlockingQueue documentation summarizes all of this nicely (and cleared up my own confusion).
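One caveat: take() never returns null, so a loop of the form `while ((x = queue.take()) != null)` will block forever once the producer finishes. A common way to let consumers shut down cleanly is a sentinel ("poison pill") element that the producer enqueues when it is done. A minimal sketch, with names of my own choosing:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PoisonPillDemo {
    // Hypothetical sentinel value marking end-of-stream; with one pill per
    // consumer thread, every consumer gets to see one and exit.
    static final String POISON = "__END__";

    static int consume(BlockingQueue<String> queue) throws InterruptedException {
        int processed = 0;
        while (true) {
            String item = queue.take(); // blocks without burning CPU
            if (POISON.equals(item)) {
                break; // producer signalled completion
            }
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("file1");
        queue.put("file2");
        queue.put(POISON);
        System.out.println(consume(queue)); // prints 2
    }
}
```

With ten consumers, the producer would enqueue ten pills after the last real item so that every consumer thread terminates.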



 