I need to check my solution for idempotency and to see how much it differs from the previous solution.
I tried the following:
spark.sql('''
    select * from t1
    except
    select * from t2
''').count()
This gives me the number of rows by which the tables differ (t1 is my solution, t2 is the original data). When many rows differ, I also want to find out in which columns they differ.
So I tried this:
diff = {}
columns = t1.columns
for col in columns:
    # note the f-string prefix, otherwise {col} is not interpolated
    cntr = spark.sql(f'''
        select {col} from t1
        except
        select {col} from t2
    ''').count()
    diff[col] = cntr
print(diff)
This is not good enough for me, because it takes about 1-2 hours to run (both tables have 30 columns and 30 million rows of data).
Does anyone have an idea how to compute this quickly?
EXCEPT is a kind of join on all columns at the same time. Does your data have a primary key? It could even be composite, comprising multiple columns, but that is still much better than taking all 30 columns into account. Once you figure out the primary key, you can do a FULL OUTER JOIN and check for NULLs on the left and NULLs on the right.

Given that your resources remain unchanged, I think there are three ways you can optimize:
1. except: I assume your dataset has a key / index; otherwise there is no ordering in either dataframe and you can't meaningfully use except to check the difference. Unless you have very limited resources, do a single join to combine the two dataframes first instead of running except many times.
2. repartition both dataframes on the key before joining. If your key is a sequence number, use repartitionByRange.
3. After the join, count the differences for every column in a single select, e.g.:
df.select(func.sum(func.when(func.col('df1.col_a') != func.col('df2.col_a'), func.lit(1)).otherwise(func.lit(0))).alias('diff_in_col_a_count'))
This way you calculate all the differences within one action instead of one action per column.