
Large list FlatMap Java Spark

I have a large list in a JavaPairRDD<Integer, List<String>> and I want to do a flatMap to get all possible combinations of the list entries, so that I end up with a JavaPairRDD<Integer, Tuple2<String, String>>. Basically, if I have something like

(1, ["A", "B", "C"])

I want to get:

(1, <"A","B">) (1, <"A", "C">) (1, <"B", "C")

The problem is with large lists: what I have done is create a large list of Tuple2 objects by running a nested loop over the input list, and sometimes this list does not fit in memory. I found this, but I am not sure how to implement it in Java: Spark FlatMap function for huge lists
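For illustration, a minimal sketch of that nested-loop approach (the input RDD name original is an assumption); it materializes every combination for a key in one Java list, which is what exhausts the heap:

import java.util.ArrayList;
import java.util.List;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;

// Naive approach: build all pairwise combinations per key in memory.
// A list of n entries yields n*(n-1)/2 tuples at once.
JavaPairRDD<Integer, Tuple2<String, String>> combos = original.flatMapValues(list -> {
    List<Tuple2<String, String>> out = new ArrayList<>();
    for (int i = 0; i < list.size(); i++) {
        for (int j = i + 1; j < list.size(); j++) {
            out.add(new Tuple2<>(list.get(i), list.get(j)));
        }
    }
    return out; // newer Spark versions expect out.iterator() here
});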

You may want to flatMap the list and then join the RDD with itself, then filter the joined pairs so that self-pairs (and the second ordering of each pair) are dropped:

import java.util.List;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;

JavaPairRDD<Integer, List<String>> original = // ...
// Flatten each (key, list) into one (key, element) record per element
// (Spark 1.x signature; newer versions take list -> list.iterator()).
JavaPairRDD<Integer, String> flattened = original.flatMapValues(list -> list);
JavaPairRDD<Integer, Tuple2<String, String>> joined = flattened.join(flattened);
JavaPairRDD<Integer, Tuple2<String, String>> filtered =
    joined.filter(new Function<Tuple2<Integer, Tuple2<String, String>>, Boolean>() {
        @Override
        public Boolean call(Tuple2<Integer, Tuple2<String, String>> kv) throws Exception {
            // Keep each unordered pair once and drop self-pairs such as ("A","A").
            return kv._2()._1().compareTo(kv._2()._2()) < 0;
        }
    });
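The benefit is that the n×n pairs per key are produced by the shuffle, partition by partition, instead of being materialized as one huge in-memory list, so they never have to fit in a single executor's heap at once. A minimal usage sketch (assuming an existing JavaSparkContext sc; collect() is only for small test data):

import java.util.Arrays;

// Build the example from the question and run the pipeline above.
JavaPairRDD<Integer, List<String>> original = sc.parallelizePairs(
        Arrays.asList(new Tuple2<>(1, Arrays.asList("A", "B", "C"))));
// filtered.collect() => [(1,(A,B)), (1,(A,C)), (1,(B,C))] (in some order)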

It depends on how big your datasets are. In my job I usually have to process 100-200 GB datasets, and both flatMap and flatMapToPair work fine for computation-heavy workloads. Example below:

// Sketch of a flatMapToPair call; datasetsRDD and the pair-building logic are placeholders.
JavaPairRDD<Integer, List<String>> result = datasetsRDD
    .flatMapToPair(x -> {
        List<Tuple2<Integer, List<String>>> pairs = new ArrayList<>();
        // ... derive zero or more (key, value) pairs from x ...
        return pairs.iterator(); // Spark 2.x PairFlatMapFunction returns an Iterator
    });

Also, if your datasets are huge, you can try Spark's persistence to disk by choosing a suitable storage level (see the sketch after the list below):

Storage Level   

    MEMORY_ONLY
    MEMORY_ONLY_SER 
    MEMORY_AND_DISK_SER 
    DISK_ONLY
    MEMORY_ONLY_2
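A minimal sketch of persisting an RDD with an explicit storage level (the RDD name is illustrative):

import org.apache.spark.storage.StorageLevel;

// Keep partitions serialized in memory and spill to disk when they don't fit.
flattened.persist(StorageLevel.MEMORY_AND_DISK_SER());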

References: https://spark.apache.org/docs/latest/rdd-programming-guide.html
