
PySpark to Pandas Dataframe conversion: Error in data types while converting

Suppose I have a dataframe abc in Spark, as shown below:

ID    Trxn_Date    Order_Date    Sales_Rep  Order_Category   Sales_Amount   Discount
100   2021-03-24   2021-03-17    Mathew     DailyStaples       1000           1.50
133   2021-01-22   2021-01-12    Camelia    Medicines          2000           0.50

Objective:

Randomly pick one column for each data type and find its minimum and maximum values, column by column.

  - For a `numerical` column it should also compute the sum or average.
  - For a `string` column it should compute the maximum and minimum length.

Create another dataframe with the following structure:

Table_Name    Column_Name     Min           Max           Sum
abc           Trxn_Date       2021-01-22    2021-03-24
abc           Sales_Rep       6             7                     <---- len('Mathew') = 6 and len('Camelia') = 7
abc           Sales_Amount    1000          2000          3000

I am using the following code, but it picks up all of the columns. Also, when I run it in the Databricks / PySpark environment, I get an error as shown below.

import pandas as pd

table_lst = ['table_1','table_2']
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
df_list = []
for i in table_lst:
  sdf_i = spark.sql("SELECT * FROM schema_name.{0}".format(i))
  df_i = sdf_i.select("*").toPandas()
df_list.append(df_i)
d = {}
for i,j in zip(table_name,dfs):
   d[i] = j
df_concat = []
for k,v in d.items():
   val_df = {}
   for i,j in zip(v.columns,v.dtypes):
     if 'object' in str(j):
        max_s = v[i].map(len).max()
        min_s = v[i].map(len).min()
        val_df[k+'-'+i+'_'+'Max_String_L']= max_s
        val_df[k+'-'+i+'_'+'Min_String_L']= min_s
     elif 'int' or 'float' in str(j):
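        # NOTE: `'int' or 'float' in str(j)` is always truthy ('int' is a non-empty string),
        # so every non-object column falls into this branch; the intended test is
        # ('int' in str(j)) or ('float' in str(j)).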
        max_int = v[i].max()      # <------ Error line as indicated in Databricks
        min_int = v[i].min()
        val_df[k+'-'+i+'Max_Num'] = max_int
        val_df[k+'-'+i+'_'+'Min_Num'] = min_int
     elif 'datetime' in str(j):
        max_date = v[i].max()
        min_date = v[i].min()
        val_df[k+'-'+i+'_'+'Max_Date'] = max_date
        val_df[k+'-'+i+'_'+'Min_Date'] = min_date
     else:
        print('left anythg?')
  df_f_d = pd.DataFrame.from_dict(val_df,orient='index').reset_index()
  df_concat.append(df_f_d)

When I run this code on Databricks PySpark, I get the following error:

 TypeError: '>=' not supported between instances of 'float' and 'str' 

Also, as shown above, the code does not produce the resulting dataframe.

I suspect that when converting the Spark DataFrame to pandas, all of the data types are being converted to string.
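
A quick way to check that suspicion is to compare the Spark-side column types with what pandas infers after the conversion. A minimal sketch, reusing schema_name.table_1 from the loop above:

sdf_check = spark.sql("SELECT * FROM schema_name.table_1")
print(sdf_check.dtypes)     # list of (column name, Spark type) pairs

pdf_check = sdf_check.toPandas()
print(pdf_check.dtypes)     # pandas dtypes; Spark string columns come back as object

If the date columns show up as object here, the 'datetime' branch in the loop above is never reached for them.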

So, how can this be resolved? Can the code above also be modified to achieve the objective?

My solution is a bit long (probably longer than the requirement expects); you can debug it at each step as needed. The overall idea is:

  1. Distinguish which columns are strings and which are numeric.
  2. Use the describe function to get the min and max of each column.
  3. describe does not compute sum and average, however, so those have to be aggregated separately.

# a.csv
# ID,Trxn_Date,Order_Date,Sales_Rep,Order_Category,Sales_Amount,Discount
# 100,2021-03-24,2021-03-17,Mathew,DailyStaples,1000,1.50
# 133,2021-01-22,2021-01-12,Camelia,Medicines,2000,0.50

from pyspark.sql import functions as F

df = spark.read.csv('a.csv', header=True, inferSchema=True)
all_cols = [col[0] for col in df.dtypes]
date_cols = ['Trxn_Date', 'Order_Date'] # Spark doesn't infer DateType, so I handle these two manually. You can skip this if your original schema already has date types.
str_cols = [col[0] for col in df.dtypes if col[1] == 'string' and col[0] not in date_cols]
num_cols = [col[0] for col in df.dtypes if col[1] in ['int', 'double']]

# replace actual string values with its length
for col in str_cols:
    df = df.withColumn(col, F.length(col))

# calculate min max and transpose dataframe
df1 = (df
    .describe()
    .where(F.col('summary').isin('min', 'max'))
    .withColumn('keys', F.array([F.lit(c) for c in all_cols]))
    .withColumn('values', F.array([F.col(c) for c in all_cols]))
    .withColumn('maps', F.map_from_arrays('keys', 'values'))
    .select('summary', F.explode('maps').alias('col', 'value'))
    .groupBy('col')
    .agg(
        F.collect_list('summary').alias('keys'),
        F.collect_list('value').alias('values')
    )
    .withColumn('maps', F.map_from_arrays('keys', 'values'))
    .select('col', 'maps.min', 'maps.max')
)
df1.show(10, False)
# +--------------+----------+----------+
# |col           |min       |max       |
# +--------------+----------+----------+
# |Sales_Amount  |1000      |2000      |
# |Sales_Rep     |6         |7         |
# |Order_Category|9         |12        |
# |ID            |100       |133       |
# |Discount      |0.5       |1.5       |
# |Trxn_Date     |2021-01-22|2021-03-24|
# |Order_Date    |2021-01-12|2021-03-17|
# +--------------+----------+----------+

# calculate sum and transpose dataframe
df2 = (df
    .groupBy(F.lit(1).alias('sum'))
    .agg(*[F.sum(c).alias(c) for c in num_cols])
    .withColumn('keys', F.array([F.lit(c) for c in num_cols]))
    .withColumn('values', F.array([F.col(c) for c in num_cols]))
    .withColumn('maps', F.map_from_arrays('keys', 'values'))
    .select(F.explode('maps').alias('col', 'sum'))
)
df2.show(10, False)
# +------------+------+
# |col         |sum   |
# +------------+------+
# |ID          |233.0 |
# |Sales_Amount|3000.0|
# |Discount    |2.0   |
# +------------+------+

# Join them together to get final dataframe
df1.join(df2, on=['col'], how='left').show()
# +--------------+----------+----------+------+
# |           col|       min|       max|   sum|
# +--------------+----------+----------+------+
# |  Sales_Amount|      1000|      2000|3000.0|
# |     Sales_Rep|         6|         7|  null|
# |Order_Category|         9|        12|  null|
# |            ID|       100|       133| 233.0|
# |      Discount|       0.5|       1.5|   2.0|
# |     Trxn_Date|2021-01-22|2021-03-24|  null|
# |    Order_Date|2021-01-12|2021-03-17|  null|
# +--------------+----------+----------+------+
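
If the result has to match the exact structure asked for in the question (Table_Name, Column_Name, Min, Max, Sum), the joined dataframe can be reshaped with one more select. A sketch, assuming the source table is named abc:

result = (df1
    .join(df2, on=['col'], how='left')
    .withColumn('Table_Name', F.lit('abc'))   # table name hard-coded for this sketch
    .select(
        'Table_Name',
        F.col('col').alias('Column_Name'),
        F.col('min').alias('Min'),
        F.col('max').alias('Max'),
        F.col('sum').alias('Sum'),
    )
)
result.show(10, False)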
