
Spark: best approach for a look-up DataFrame join to improve performance

DataFrame A (millions of records) has, among other columns, create_date and modified_date.

DataFrame B (500 records) has the columns start_date and end_date.

Current approach:

SELECT a.*, b.* FROM a JOIN b ON a.create_date BETWEEN b.start_date AND b.end_date

The above job takes half an hour or more to run.

How can I improve the performance?

Spark job details:


The DataFrame API currently doesn't have an approach for a direct join like that; it will fully read both tables before performing the join.

https://issues.apache.org/jira/browse/SPARK-16614

You can use the RDD API to take advantage of the joinWithCassandraTable function:

https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md#using-joinwithcassandratable
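The point of joinWithCassandraTable is that each record in the driving dataset triggers a point lookup of only the partition keys it needs, rather than a full scan and shuffle of the large table. A minimal pure-Python sketch of that idea, with a plain dict standing in for the Cassandra table (all names and data here are invented for illustration):

```python
# Toy "Cassandra table": partition_key -> row. In reality this would be
# a C* table read through the spark-cassandra-connector.
cassandra_table = {
    "k1": {"payload": "a"},
    "k2": {"payload": "b"},
    "k3": {"payload": "c"},
}

# Driving dataset: (key, value) pairs, like the RDD you would join from.
driving_rdd = [("k1", 10), ("k3", 30), ("k9", 90)]

def join_with_lookup(records, table):
    """Inner-join records against the table by key, reading only the
    keys that actually occur in `records` (no full table scan)."""
    out = []
    for key, value in records:
        row = table.get(key)  # point lookup, like a C* partition read
        if row is not None:
            out.append((key, value, row["payload"]))
    return out

print(join_with_lookup(driving_rdd, cassandra_table))
# -> [('k1', 10, 'a'), ('k3', 30, 'c')]
```

Note that this is a key-equality lookup; a range predicate like `BETWEEN start_date AND end_date` cannot be pushed down this way and needs one of the other approaches below.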

As others suggested, one approach is to broadcast the smaller DataFrame. Spark can also do this automatically if you configure the parameter below.

spark.sql.autoBroadcastJoinThreshold

If the DataFrame is smaller than the value specified here, Spark automatically broadcasts it and performs a broadcast join instead of a shuffle join.
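To see why broadcasting helps here: the 500-row table is small enough to ship to every executor, so each big-table record can find its matching date range with a local binary search instead of participating in a shuffle. (In PySpark you would hint this explicitly with `pyspark.sql.functions.broadcast` on the small DataFrame.) A pure-Python sketch of that local lookup, assuming non-overlapping ranges; all names and data are invented for illustration:

```python
import bisect
from datetime import date

# "DataFrame B": small lookup table of non-overlapping [start, end] ranges,
# sorted by start_date once, as if broadcast to each executor.
ranges = sorted([
    (date(2020, 1, 1), date(2020, 3, 31), "Q1"),
    (date(2020, 4, 1), date(2020, 6, 30), "Q2"),
    (date(2020, 7, 1), date(2020, 9, 30), "Q3"),
])
starts = [r[0] for r in ranges]  # sorted start_dates for binary search

def lookup(create_date):
    """Return the label of the range whose [start, end] contains
    create_date, or None if no range matches."""
    i = bisect.bisect_right(starts, create_date) - 1
    if i >= 0 and create_date <= ranges[i][1]:
        return ranges[i][2]
    return None

print(lookup(date(2020, 5, 15)))   # -> Q2
print(lookup(date(2020, 12, 25)))  # -> None (no matching range)
```

Each big-table record now costs O(log 500) locally instead of a shuffle exchange, which is the effect the broadcast join achieves inside Spark.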
