
Very slow Spark performance

I am a newbie to Spark and need some help debugging very slow performance. I am doing the transformations below, and the job has been running for more than 2 hours.

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext( sc )
hiveContext: org.apache.spark.sql.hive.HiveContext =      org.apache.spark.sql.hive.HiveContext@2b33f7a0
scala> val t1_df = hiveContext.sql("select * from T1" )

scala> t1_df.registerTempTable( "T1" )
warning: there was one deprecation warning; re-run with -deprecation for details

scala> t1_df.count
17/06/07 07:26:51 WARN util.Utils: Truncated the string representation of a    plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
res3: Long = 1732831

scala> val t1_df1 = t1_df.dropDuplicates( Array("c1","c2","c3", "c4" ))

scala> t1_df1.registerTempTable( "ABC" )
warning: there was one deprecation warning; re-run with -deprecation for details

scala> hiveContext.sql( "select * from T1 where c1 not in ( select c1 from ABC )" ).count
[Stage 4:====================================================>    (89 + 8) / 97]
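Side note: the deprecation warnings above come from registerTempTable; they are unrelated to the slowness. In Spark 2.x the non-deprecated equivalent is createOrReplaceTempView, e.g.:

// Spark 2.x replacement for the deprecated registerTempTable
t1_df.createOrReplaceTempView("T1")
t1_df1.createOrReplaceTempView("ABC")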

I am using Spark 2.1.0 and reading data from Hive 2.1.1 on an Amazon VM cluster of 7 nodes, each with 250 GB RAM and 64 virtual cores. With these massive resources, I expected this simple query on 1.7 million records to fly, but it is painfully slow. Any pointers would be of great help.

UPDATE: adding the explain plan:

 scala> hiveContext.sql( "select * from T1 where c1 not in ( select c1 from    ABC )" ).explain
    == Physical Plan ==
    BroadcastNestedLoopJoin BuildRight, LeftAnti, (isnull((c1#26 = c1#26#1398))   || (c1#26 = c1#26#1398))
:- FileScan parquet default.t1_pq[cols ... more fields] Batched: false, Format: Parquet, Location: InMemoryFileIndex[hdfs://<hostname>/user/hive/warehouse/atn_load_pq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<hdr_msg_src:string,hdr_recv_tsmp:timestamp,hdr_desk_id:string,execprc:string,dreg:string,c...
+- BroadcastExchange IdentityBroadcastMode
   +- *HashAggregate(keys=[c1#26, c2#59, c3#60L, c4#82], functions=[])
      +- Exchange hashpartitioning(c1#26, c2#59, c3#60L, c4#82, 200)
         +- *HashAggregate(keys=[c1#26, c2#59, c3#60L, c4#82], functions=[])
            +- *FileScan parquet default.atn_load_pq[c1#26,c2#59,c3#60L,c4#82] Batched: true, Format: Parquet, Location: InMemoryFileIndex[hdfs://<hostname>/user/hive/warehouse/atn_load_pq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:string,c2:string,c3:bigint,c4:string>

Although I think your count will always be 0 with your query (dropDuplicates keeps at least one row per distinct (c1, c2, c3, c4), so every c1 in T1 also appears in ABC), you may try to use a left-anti join instead, and don't forget to cache t1_df to avoid recomputing it several times:

val t1_df = hiveContext.sql("select * from T1" ).cache

t1_df
   .join(
     t1_df.dropDuplicates( Array("c1","c2","c3", "c4" )),
     Seq("c1"),
     "leftanti"
   )
   .count()
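
To verify that the rewrite actually changes the plan, you can call explain on the anti join before triggering the count. This reuses the cached t1_df from the snippet above; with an equality key on c1, Spark should be able to pick a hash- or sort-merge-based LeftAnti join instead of the BroadcastNestedLoopJoin shown in the question:

// Physical plan should no longer contain a BroadcastNestedLoopJoin
t1_df
   .join(
     t1_df.dropDuplicates( Array("c1","c2","c3", "c4" )),
     Seq("c1"),
     "leftanti"
   )
   .explain()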
