Convert a query from spark.sql to Impala
I have the following query in pyspark:
from pyspark.sql.functions import col, count, lit

df = (
    spark.sql("""select id, track_id, data_source
                 from db.races
                 where dt_date = 20201010""")
    .groupBy("id", "track_id", "data_source")
    .agg(count("*").alias("num_races"))
    .withColumn("last_num_id", col("id").substr(-1, 1))
    .withColumn("last_num_track_id", col("track_id").substr(-1, 1))
    .withColumn("status_date", lit(previous_date))
)
I want to convert it to an Impala query.
My attempt so far:
select id, track_id, data_source
from db.races
group by id, track_id, data_source
...
I can follow it up to the groupBy part, but after that I can't work out exactly how these pyspark functions translate.
I'm not familiar with Impala, but here is my attempt at writing the SQL query:
select
t.*,
substr(t.id, -1, 1) as last_num_id,
substr(t.track_id, -1, 1) as last_num_track_id,
'(put the previous_date here)' as status_date
from (
select id, track_id, data_source, count(*) as num_races
from db.races
where dt_date = 20201010
group by id, track_id, data_source
) as t
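To sanity-check the shape of that query, here is a small pure-Python sketch (not Spark or Impala; the sample rows and the `previous_date` value are made up for illustration) that mimics the inner GROUP BY / COUNT(*) and the two last-character columns:

```python
from collections import Counter

# Hypothetical sample rows (id, track_id, data_source),
# already filtered to dt_date = 20201010.
rows = [
    ("A12", "T07", "web"),
    ("A12", "T07", "web"),
    ("B33", "T15", "app"),
]

# Inner query: GROUP BY id, track_id, data_source with COUNT(*) AS num_races.
counts = Counter(rows)

# Outer query: add last_num_id, last_num_track_id, and a literal status_date.
previous_date = "20201009"  # stand-in for the previous_date variable in the question
result = [
    {
        "id": id_,
        "track_id": track_id,
        "data_source": src,
        "num_races": n,
        "last_num_id": id_[-1],             # substr(id, -1, 1)
        "last_num_track_id": track_id[-1],  # substr(track_id, -1, 1)
        "status_date": previous_date,       # lit(previous_date)
    }
    for (id_, track_id, src), n in counts.items()
]
```

The point of the nesting in the SQL version is the same as here: the count has to be computed per group first, and only then can the derived columns be attached to each grouped row.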