
How to compare data types and columns in two DataFrames in PySpark

I have two DataFrames in PySpark, df1 and df2. Their schemas are shown below:

>>> df1.printSchema()
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- address: string (nullable = true)
 |-- Zip: decimal(18,2) (nullable = true)


>>> df2.printSchema()
root 
 |-- id: integer (nullable = true)
 |-- name: string (nullable = true)
 |-- address: string (nullable = true)
 |-- Zip: decimal(9,2) (nullable = true)
 |-- nation: string (nullable = true)

Now I want to compare the two DataFrames for differences in columns and data types.

How can we achieve that in PySpark?

EXPECTED OUTPUT:

Columns:

ID  Col_Name  DataFrame
1   nation    df2

Data Types:

ID  Col_Name  DF1            DF2
1   id        None           None
2   name      None           None
3   address   None           None
4   Zip       decimal(18,2)  decimal(9,2)
5   nation    None           None

Update from the asker, showing the outputs of the answer's code when run on a different dataset (movieId/title/genres/zip):

type1.printSchema()
root
 |-- col_name: string (nullable = true)
 |-- dtype: string (nullable = true)
 |-- dataframe: string (nullable = false)

type2.printSchema()
root
 |-- col_name: string (nullable = true)
 |-- dtype: string (nullable = true)
 |-- dataframe: string (nullable = false)

result2.show()
+--------+----+----+
|col_name| df1| df2|
+--------+----+----+
| movieId|null|null|
|   title|null|null|
|     zip|null|null|
|  genres|null|null|
+--------+----+----+

type1.show()
+--------+------+---------+
|col_name| dtype|dataframe|
+--------+------+---------+
| movieId|   int|      df1|
|   title|string|      df1|
|  genres|string|      df1|
|     zip|string|      df1|
+--------+------+---------+

type2.show()
+--------+------+---------+
|col_name| dtype|dataframe|
+--------+------+---------+
| movieId|   int|      df2|
|   title|string|      df2|
|  genres|string|      df2|
|     zip|   int|      df2|
+--------+------+---------+

You can create DataFrames of the column data types and operate on them to get the desired results. I used Spark DataFrames here, but I'd guess pandas would work as well (a minimal pandas sketch is included at the end).

import pyspark.sql.functions as F

# df.dtypes is a list of (column_name, type_string) tuples,
# which createDataFrame can consume directly
type1 = spark.createDataFrame(
    df1.dtypes, 'col_name string, dtype string'
).withColumn('dataframe', F.lit('df1'))

type2 = spark.createDataFrame(
    df2.dtypes, 'col_name string, dtype string'
).withColumn('dataframe', F.lit('df2'))

# left_anti keeps rows with no match on the other side, so the union
# collects the columns that exist in only one of the two DataFrames
result1 = type1.join(type2, 'col_name', 'left_anti').unionAll(
    type2.join(type1, 'col_name', 'left_anti')
).drop('dtype')

result1.show()
+--------+---------+
|col_name|dataframe|
+--------+---------+
|  nation|      df2|
+--------+---------+

# a full outer join lines up the two type lists; the when() conditions
# evaluate to null (shown as null below) unless the dtypes actually differ
result2 = type1.join(type2, 'col_name', 'full').select(
    'col_name',
    F.when(type1.dtype != type2.dtype, type1.dtype).alias('df1'),
    F.when(type1.dtype != type2.dtype, type2.dtype).alias('df2')
)

result2.show()
+--------+-------------+------------+
|col_name|          df1|         df2|
+--------+-------------+------------+
|    name|         null|        null|
|  nation|         null|        null|
|     Zip|decimal(18,2)|decimal(9,2)|
|      id|         null|        null|
| address|         null|        null|
+--------+-------------+------------+
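
As for the pandas route mentioned above, here is a minimal sketch of the same comparison (assuming pandas is available; the names t1, t2, merged, column_diff and type_diff are just illustrative):

import pandas as pd

# df.dtypes is the same list of (column_name, type_string) tuples used above
t1 = pd.DataFrame(df1.dtypes, columns=['col_name', 'df1'])
t2 = pd.DataFrame(df2.dtypes, columns=['col_name', 'df2'])

merged = t1.merge(t2, on='col_name', how='outer')

# columns that exist in only one of the two DataFrames
column_diff = merged[merged['df1'].isna() | merged['df2'].isna()]

# columns present in both, but with different data types
type_diff = merged[
    merged['df1'].notna() & merged['df2'].notna() & (merged['df1'] != merged['df2'])
]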
