
It is very slow for Spark RDD union

I have 2 Spark RDDs, dataRDD and newPairDataRDD, which are used for a Spark SQL query. When my application initializes, dataRDD is initialized: all data from one specified HBase entity is loaded into dataRDD.

When a client's SQL query arrives, my app loads all the new updates and inserts into newPairDataRDD, unions dataRDD with newPairDataRDD, and registers the result as a table in the Spark SQL context.

I found that even with 0 records in dataRDD and 1 newly inserted record in newPairDataRDD, the union takes 4 seconds. That's too slow.

I don't think that is reasonable. Does anyone know how to make it quicker? Thanks. Simple code below:

    // Step 1: load all data from HBase into dataRDD at startup; this runs only once.
    JavaPairRDD<String, Row> dataRDD = getAllBaseDataToJavaRDD();
    dataRDD.persist(StorageLevel.MEMORY_ONLY()); // cache() would be equivalent, so one call suffices
    logger.info(String.valueOf(dataRDD.count()));

    // Step 2: when a Spark SQL query arrives, load the latest updated and
    // inserted data from the DB into newPairDataRDD
    JavaPairRDD<String, Row> newPairDataRDD = getUpdateOrInstertBaseDataToJavaRDD();

    // Step 3: if count > 0, union and reduce
    if (newPairDataRDD.count() > 0) {
        JavaPairRDD<String, Row> unionedRDD = dataRDD.union(newPairDataRDD);

        // If data was updated in the DB, the old version must be dropped from
        // dataRDD; keeping r2 makes the newer row win for each key.
        dataRDD = unionedRDD.reduceByKey(
            new Function2<Row, Row, Row>() {
                @Override
                public Row call(Row r1, Row r2) {
                    return r2;
                }
            });
    }

    // Step 4: register the dataRDD
    JavaSchemaRDD schemaRDD = sqlContext.applySchema(dataRDD.values(), schema);

    // Step 5: execute the SQL query
    JavaSchemaRDD retRDD = sqlContext.sql(sql);
    List<org.apache.spark.sql.api.java.Row> rows = retRDD.collect();
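The reduceByKey step above relies on the newer row arriving as the second argument; that is what makes an updated row replace the old version. As a minimal plain-Java sketch of the same last-write-wins merge (no Spark required; Row is stood in for by String, and all names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

public class LastWriteWins {
    // Same contract as the Function2<Row, Row, Row> above: keep the second value.
    static final BinaryOperator<String> KEEP_NEWER = (r1, r2) -> r2;

    // Simulates union + reduceByKey over (key, value) pairs.
    static Map<String, String> unionAndReduce(Map<String, String> base,
                                              Map<String, String> updates) {
        Map<String, String> merged = new LinkedHashMap<>(base);
        // merge() applies KEEP_NEWER when the key already exists,
        // so an updated row replaces the old version, as in reduceByKey.
        updates.forEach((k, v) -> merged.merge(k, v, KEEP_NEWER));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> base = new LinkedHashMap<>(Map.of("row1", "old", "row2", "v1"));
        Map<String, String> updates = Map.of("row1", "new", "row3", "inserted");
        Map<String, String> merged = unionAndReduce(base, updates);
        System.out.println(merged.get("row1")); // new
        System.out.println(merged.size());      // 3
    }
}
```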

From the Spark web UI, I can see the stages below. Apparently the union needs 4 s.

Completed Stages (8)

StageId  Description                                         Submitted      Duration  Tasks: Succeeded/Total  Input   Shuffle Read  Shuffle Write
6        collect at SparkPlan.scala:85 +details              1/4/2015 8:17  2 s       8/8                             156.0 B
7        union at SparkSqlQueryForMarsNew.java:389 +details  1/4/2015 8:17  4 s       8/8                     64.0 B                156.0 B

A more efficient way to achieve what you want is to use a cogroup() and a flatMapValues(). Using a union does very little except add new partitions to the dataRDD, meaning all the data must be shuffled before the reduceByKey(). A cogroup() and flatMapValues() will cause repartitioning of only the newPairDataRDD.

JavaPairRDD<String, Tuple2<Iterable<Row>, Iterable<Row>>> unionedRDD = dataRDD.cogroup(newPairDataRDD);
JavaPairRDD<String, Row> updated = unionedRDD.flatMapValues(
    new Function<Tuple2<Iterable<Row>, Iterable<Row>>, Iterable<Row>>() {
        @Override
        public Iterable<Row> call(Tuple2<Iterable<Row>, Iterable<Row>> grouped) {
            // Prefer the new values when present; otherwise keep the old ones.
            return grouped._2.iterator().hasNext() ? grouped._2 : grouped._1;
        }
    });

Or in Scala:

val unioned = dataRDD.cogroup(newPairDataRDD)
val updated = unioned.flatMapValues { case (oldVals, newVals) =>
    if (newVals.nonEmpty) newVals else oldVals
}
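Both versions implement the same rule: for each key, take the new values if any exist, otherwise keep the old ones; keys that appear only on the new side are plain inserts. A plain-Java sketch of that cogroup-and-merge rule over in-memory maps (no Spark required; names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CogroupMerge {
    // Simulates cogroup(dataRDD, newPairDataRDD) followed by the
    // flatMapValues above: for each key, new values win when present.
    static Map<String, List<String>> cogroupPreferNew(Map<String, List<String>> oldData,
                                                      Map<String, List<String>> newData) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        // Keys in the old data (possibly also in the new): the prefer-new rule decides.
        for (Map.Entry<String, List<String>> e : oldData.entrySet()) {
            List<String> newVals = newData.getOrDefault(e.getKey(), List.of());
            result.put(e.getKey(), newVals.isEmpty() ? e.getValue() : newVals);
        }
        // Keys only in the new data are plain inserts.
        for (Map.Entry<String, List<String>> e : newData.entrySet()) {
            result.putIfAbsent(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> oldData = new LinkedHashMap<>();
        oldData.put("row1", List.of("old"));
        oldData.put("row2", List.of("kept"));
        Map<String, List<String>> newData = Map.of("row1", List.of("new"),
                                                   "row3", List.of("inserted"));
        Map<String, List<String>> merged = cogroupPreferNew(oldData, newData);
        System.out.println(merged.get("row1")); // [new]
        System.out.println(merged.get("row2")); // [kept]
        System.out.println(merged.size());      // 3
    }
}
```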

Disclaimer: I'm not used to writing Spark in Java! Please correct me if the above is wrong.

Try repartitioning your RDDs:

JavaPairRDD<String, Row> unionedRDD = dataRDD.repartition(sc.defaultParallelism() * 3).union(newPairDataRDD.repartition(sc.defaultParallelism() * 3));
