
Optimizing Apache Spark SQL Queries

I am seeing very long latencies in Apache Spark when running some SQL queries. To simplify the logic, I run my calculations sequentially: the output of each query is stored as a temporary table (via .registerTempTable('TEMP')) so it can be used in the following SQL query, and so on. But the queries take far too long, whereas the equivalent 'pure Python' code finishes in just a few minutes.

 sqlContext.sql(""" SELECT PFMT.* , DICO_SITES.CodeAPI FROM PFMT INNER JOIN DICO_SITES ON PFMT.assembly_department = DICO_SITES.CodeProg """).registerTempTable("PFMT_API_CODE") sqlContext.sql(""" SELECT GAMMA.*, (GAMMA.VOLUME*GAMMA.PRORATA)/100 AS VOLUME_PER_SUPPLIER FROM (SELECT PFMT_API_CODE.* , SUPPLIERS_PROP.CODE_SITE_FOURNISSEUR, SUPPLIERS_PROP.PRORATA FROM PFMT_API_CODE INNER JOIN SUPPLIERS_PROP ON PFMT_API_CODE.reference = SUPPLIERS_PROP.PIE_NUMERO AND PFMT_API_CODE.project_code = SUPPLIERS_PROP.FAM_CODE AND PFMT_API_CODE.CodeAPI = SUPPLIERS_PROP.SITE_UTILISATION_FINAL) GAMMA """).registerTempTable("TEMP_ONE") sqlContext.sql(""" SELECT TEMP_ONE.* , ADCP_DATA.* , CASE WHEN ADCP_DATA.WEEK <= weekofyear(from_unixtime(unix_timestamp())) + 24 THEN ADCP_DATA.CAPACITY_ST + ADCP_DATA.ADD_CAPACITY_ST WHEN ADCP_DATA.WEEK > weekofyear(from_unixtime(unix_timestamp())) + 24 THEN ADCP_DATA.CAPACITY_LT + ADCP_DATA.ADD_CAPACITY_LT END AS CAPACITY_REF FROM TEMP_ONE INNER JOIN ADCP_DATA ON TEMP_ONE.reference = ADCP_DATA.PART_NUMBER AND TEMP_ONE.CodeAPI = ADCP_DATA.API_CODE AND TEMP_ONE.project_code = ADCP_DATA.PROJECT_CODE AND TEMP_ONE.CODE_SITE_FOURNISSEUR = ADCP_DATA.SUPPLIER_SITE_CODE AND TEMP_ONE.WEEK_NUM = ADCP_DATA.WEEK_NUM """ ).registerTempTable('TEMP_BIS') sqlContext.sql(""" SELECT TEMP_BIS.CSF_ID, TEMP_BIS.CF_ID , TEMP_BIS.CAPACITY_REF, TEMP_BIS.VOLUME_PER_SUPPLIER, CASE WHEN TEMP_BIS.CAPACITY_REF >= VOLUME_PER_SUPPLIER THEN 'CAPACITY_OK' WHEN TEMP_BIS.CAPACITY_REF < VOLUME_PER_SUPPLIER THEN 'CAPACITY_NOK' END AS CAPACITY_CHECK FROM TEMP_BIS """).take(100) 

Could anyone point out best practices (if there are any) for writing PySpark SQL queries like these? And does it make sense that the script runs much faster locally on my computer than on the Hadoop cluster? Thanks in advance.

You should cache your intermediate results. Also, what is the data source? Can you read only the relevant rows, or only the relevant columns, from it? There are many options; you should provide more information about your data.
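As a minimal sketch of the caching and column-pruning advice above (table names follow the question; which columns are actually "relevant" is an assumption made here purely for illustration):

 # Sketch: prune columns early and cache the intermediate result so later
 # queries reuse it instead of recomputing the full lineage from the source.
 from pyspark import StorageLevel

 pfmt_api_code = sqlContext.sql("""
     SELECT PFMT.reference, PFMT.project_code, PFMT.VOLUME, PFMT.WEEK_NUM,
            DICO_SITES.CodeAPI
     FROM PFMT
     INNER JOIN DICO_SITES
         ON PFMT.assembly_department = DICO_SITES.CodeProg
 """)

 # Persist the DataFrame (memory first, spilling to disk if needed)
 # before registering it for the downstream SQL steps.
 pfmt_api_code.persist(StorageLevel.MEMORY_AND_DISK)
 pfmt_api_code.registerTempTable("PFMT_API_CODE")

 # ...the TEMP_ONE / TEMP_BIS queries can now reuse the cached table...

Alternatively, sqlContext.cacheTable("PFMT_API_CODE") achieves a similar effect after the temp table is registered.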
