
Very slow Spark performance

I am new to Spark and need some help debugging very slow performance. I am running the transformations below, and they have been going for more than two hours.

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext( sc )
hiveContext: org.apache.spark.sql.hive.HiveContext =      org.apache.spark.sql.hive.HiveContext@2b33f7a0
scala> val t1_df = hiveContext.sql("select * from T1" )

scala> t1_df.registerTempTable( "T1" )
warning: there was one deprecation warning; re-run with -deprecation for details

scala> t1_df.count
17/06/07 07:26:51 WARN util.Utils: Truncated the string representation of a    plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
res3: Long = 1732831

scala> val t1_df1 = t1_df.dropDuplicates( Array("c1","c2","c3", "c4" ))

scala> t1_df1.registerTempTable( "ABC" )
warning: there was one deprecation warning; re-run with -deprecation for details

scala> hiveContext.sql( "select * from T1 where c1 not in ( select c1 from ABC )" ).count
[Stage 4:====================================================>    (89 + 8) / 97]

I am using Spark 2.1.0, reading data from Hive 2.1.1, on a 7-node Amazon VM cluster where each node has 250GB of RAM and 64 virtual cores. With resources this large I expected this simple query over 1.7 million records to fly, but it is painfully slow. Any pointers would be a big help.

Update: adding the explain plan:

 scala> hiveContext.sql( "select * from T1 where c1 not in ( select c1 from    ABC )" ).explain
    == Physical Plan ==
    BroadcastNestedLoopJoin BuildRight, LeftAnti, (isnull((c1#26 = c1#26#1398))   || (c1#26 = c1#26#1398))
:- FileScan parquet default.t1_pq[cols
 more fields] Batched: false, Format: Parquet, Location: InMemoryFileIndex[hdfs://<hostname>/user/hive/warehouse/atn_load_pq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<hdr_msg_src:string,hdr_recv_tsmp:timestamp,hdr_desk_id:string,execprc:string,dreg:string,c...
+- BroadcastExchange IdentityBroadcastMode
   +- *HashAggregate(keys=[c1#26, c2#59, c3#60L, c4#82], functions=[])
      +- Exchange hashpartitioning(c1#26, c2#59, c3#60L, c4#82, 200)
         +- *HashAggregate(keys=[c1#26, c2#59, c3#60L, c4#82], functions=[])
            +- *FileScan parquet default.atn_load_pq[c1#26,c2#59,c3#60L,c4#82] Batched: true, Format: Parquet, Location: InMemoryFileIndex[hdfs://<hostname>/user/hive/warehouse/atn_load_pq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:string,c2:string,c3:bigint,c4:string>
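The plan above is the real problem: `NOT IN (subquery)` compiles to a null-aware anti join, and Spark 2.x can only execute that as a `BroadcastNestedLoopJoin` (note the `isnull(...) || ...` condition in the plan), which compares every left-side row against the whole broadcast side. The cause is SQL's three-valued NULL semantics for `NOT IN`. A plain-Scala model of those semantics, for illustration only (`sqlNotIn` is a hypothetical helper, not a Spark API; no Spark is needed to run it):

```scala
// Plain-Scala model of SQL "x NOT IN (subquery)" under three-valued logic.
// NULL is modelled as None. A single NULL on either side changes the answer,
// which is why Spark keeps the isnull(...) check in the join condition and
// cannot fall back to a simple hash-based anti join.
def sqlNotIn(x: Option[String], sub: Seq[Option[String]]): Boolean =
  x match {
    case None => false // NULL NOT IN (...) evaluates to UNKNOWN -> row is filtered out
    case Some(v) =>
      sub.forall {
        case None    => false // any NULL in the subquery makes the result UNKNOWN
        case Some(s) => s != v
      }
  }

println(sqlNotIn(Some("a"), Seq(Some("b"), Some("c")))) // true: "a" is absent
println(sqlNotIn(Some("a"), Seq(Some("a"), Some("b")))) // false: "a" is present
println(sqlNotIn(Some("a"), Seq(Some("b"), None)))      // false: NULL present -> UNKNOWN
```

Because the result depends on whether any value is NULL, the nested-loop plan does roughly rows × distinct-keys comparisons, which would explain the long-running stage.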

Although I think the count in your query will always be 0, you can try a left anti join instead, and don't forget to cache t1_df to avoid recomputing it multiple times:

val t1_df = hiveContext.sql("select * from T1" ).cache

t1_df
   .join(
     t1_df.dropDuplicates( Array("c1","c2","c3", "c4" )),
     Seq("c1"),
     "leftanti"
   )
   .count()
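To see why the count should be 0 either way: `dropDuplicates` keeps at least one row per distinct `(c1, c2, c3, c4)`, so every `c1` value from `T1` still appears in `ABC`, and the anti join can never find an unmatched key. A minimal plain-Scala sketch of that reasoning, with collections standing in for DataFrames:

```scala
// Tiny in-memory analogue: tuples stand in for rows of T1, and
// `distinct` stands in for dropDuplicates over all the listed columns.
val t1  = Seq(("a", 1), ("a", 2), ("b", 3), ("b", 3)) // (c1, c2) sample rows
val abc = t1.distinct                                  // "ABC": duplicate rows removed
val abcKeys = abc.map(_._1).toSet                      // c1 values present in ABC

// Left anti join on c1: rows of t1 whose c1 has no match in abc.
val antiJoin = t1.filterNot { case (c1, _) => abcKeys(c1) }
println(antiJoin.size) // 0 -- every c1 from t1 survives deduplication
```

So the query is mostly useful as a consistency check; the point of the rewrite above is that an explicit `"leftanti"` join avoids the null-aware `NOT IN` plan.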
