
Spark application does not stop when multiple threads share the same Spark context


I am trying to reproduce a problem I am facing. My problem statement - there are multiple files in a folder. I need to run a word count for each file and print the results. Each file should be processed in parallel, although of course the degree of parallelism is limited. I have written the code below to do this, and it runs fine. The cluster runs MapR's Spark installation and has spark.scheduler.mode = FIFO.

Q1 - Is there a better way to accomplish the task above?

Q2 - I have observed that the application does not stop even after the word counts for all the available files have completed. I cannot figure out how to handle this.

package groupId.artifactId;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class Executor {

    /**
     * @param args
     */
    public static void main(String[] args) {    
        final int threadPoolSize = 5;       
        SparkConf sparkConf = new SparkConf().setMaster("yarn-client").setAppName("Tracker").set("spark.ui.port","0");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf); 
        ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);
        List<Future> listOfFuture = new ArrayList<Future>();
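        // Submit up to threadPoolSize word-count callables; once the list is full,
        // poll until the current batch has finished, print the results and clear the list.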
        for (int i = 0; i < 20; i++) {
            if (listOfFuture.size() < threadPoolSize) {
                FlexiWordCount flexiWordCount = new FlexiWordCount(jsc, i);
                Future future = executor.submit(flexiWordCount);
                listOfFuture.add(future);               
            } else {
                boolean allFutureDone = false;
                while (!allFutureDone) {
                    allFutureDone = checkForAllFuture(listOfFuture);
                    System.out.println("Threads not completed yet!");
                    try {
                        Thread.sleep(2000);//waiting for 2 sec, before next check
                    } catch (InterruptedException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                }
                printFutureResult(listOfFuture);
                System.out.println("printing of future done");
                listOfFuture.clear();
                System.out.println("future list got cleared");
            }

        }
        try {
            executor.awaitTermination(5, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }



    private static void printFutureResult(List<Future> listOfFuture) {
        Iterator<Future> iterateFuture = listOfFuture.iterator();
        while (iterateFuture.hasNext()) {
            Future tempFuture = iterateFuture.next();
            try {
                System.out.println("Future result " + tempFuture.get());
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            } catch (ExecutionException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
    private static boolean checkForAllFuture(List<Future> listOfFuture) {
        boolean status = true;
        Iterator<Future> iterateFuture = listOfFuture.iterator();
        while (iterateFuture.hasNext()) {
            Future tempFuture = iterateFuture.next();
            if (!tempFuture.isDone()) {
                status = false;
                break;
            }
        }
        return status;

    }
}

package groupId.artifactId;

import java.io.Serializable;
import java.util.Arrays;
import java.util.concurrent.Callable;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class FlexiWordCount implements Callable<Object>,Serializable {


    private static final long serialVersionUID = 1L;
    private JavaSparkContext jsc;
    private int fileId;

    public FlexiWordCount(JavaSparkContext jsc, int fileId) {
        super();
        this.jsc = jsc;
        this.fileId = fileId;
    }
    private static class Reduction implements Function2<Integer, Integer, Integer>{
        @Override
        public Integer call(Integer i1, Integer i2) {
            return i1 + i2;
        }
    }

    private static class KVPair implements PairFunction<String, String, Integer>{
        @Override
        public Tuple2<String, Integer> call(String paramT)
                throws Exception {
            return new Tuple2<String, Integer>(paramT, 1);
        }
    }
    private static class Flatter implements FlatMapFunction<String, String>{

        @Override
        public Iterable<String> call(String s) {
            return Arrays.asList(s.split(" "));
        }
    }
    @Override
    public Object call() throws Exception { 
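        // Read one file by its id and run a classic word count on it; the result
        // is collected back to the driver and returned through the Future.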
        JavaRDD<String> jrd = jsc.textFile("/root/folder/experiment979/" + fileId +".txt");
        System.out.println("inside call() for fileId = " + fileId);
        JavaRDD<String> words = jrd.flatMap(new Flatter());
        JavaPairRDD<String, Integer> ones = words.mapToPair(new KVPair());      
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Reduction());
        return counts.collect();
    }
}

Why does the program not shut down automatically?

Ans: You have not closed the SparkContext. Try changing the main method to:

public static void main(String[] args) {    
    final int threadPoolSize = 5;       
    SparkConf sparkConf = new SparkConf().setMaster("yarn-client").setAppName("Tracker").set("spark.ui.port","0");
    JavaSparkContext jsc = new JavaSparkContext(sparkConf); 
    ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);
    List<Future> listOfFuture = new ArrayList<Future>();
    for (int i = 0; i < 20; i++) {
        if (listOfFuture.size() < threadPoolSize) {
            FlexiWordCount flexiWordCount = new FlexiWordCount(jsc, i);
            Future future = executor.submit(flexiWordCount);
            listOfFuture.add(future);               
        } else {
            boolean allFutureDone = false;
            while (!allFutureDone) {
                allFutureDone = checkForAllFuture(listOfFuture);
                System.out.println("Threads not completed yet!");
                try {
                    Thread.sleep(2000);//waiting for 2 sec, before next check
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
            printFutureResult(listOfFuture);
            System.out.println("printing of future done");
            listOfFuture.clear();
            System.out.println("future list got cleared");
        }

    }
    executor.shutdown(); // stop accepting new tasks; without this the pool's non-daemon threads keep the JVM alive
    try {
        executor.awaitTermination(5, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    jsc.stop(); // release the SparkContext so the application can exit
    }
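Note that executor.shutdown() is needed as well: awaitTermination only waits and never shuts the pool down on its own, and the pool's non-daemon worker threads, just like an open SparkContext, keep the JVM alive even after main returns.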

Is there a better way?

Ans: Yes, you should pass the directory of the files to the SparkContext and call .textFile on the directory; in that case Spark will parallelize the reads of the directory's files across the executors. If you create threads yourself and resubmit a job for each file on the same Spark context, you only add the extra overhead of submitting those jobs to the YARN queue.
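For example, here is a minimal sketch of that approach (DirectoryWordCount is a hypothetical class name; it assumes Java 8 lambdas with the same Spark 1.x Java API and input path used in the question):

package groupId.artifactId;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class DirectoryWordCount {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setMaster("yarn-client").setAppName("Tracker");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);

        // textFile accepts a directory (or a glob such as ".../*.txt"), so Spark
        // reads every file and parallelizes the work across executors by itself.
        JavaRDD<String> lines = jsc.textFile("/root/folder/experiment979/");

        JavaPairRDD<String, Integer> counts = lines
                .flatMap(s -> Arrays.asList(s.split(" ")))          // lines -> words
                .mapToPair(w -> new Tuple2<String, Integer>(w, 1))  // (word, 1)
                .reduceByKey((a, b) -> a + b);                      // sum per word

        System.out.println(counts.collect());
        jsc.stop(); // close the context so the application exits
    }
}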

I think the fastest approach is to pass the whole directory directly, create an RDD from it, and then let Spark launch parallel tasks to process all the files on the different executors. You can also try the .repartition() method on the RDD, since it spreads the work over many tasks that run in parallel.
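For instance, a hedged one-liner (the partition count of 20 is only an illustrative value, not something from the original answer):

// Hypothetical tuning step: spread the directory's data over more partitions
// so that more tasks can run in parallel on the executors.
JavaRDD<String> lines = jsc.textFile("/root/folder/experiment979/").repartition(20);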
