
How to split a dataframe into multiple dataframes by their column data types using SparkSQL?

Below is a sample dataframe that I want to split into multiple dataframes or RDDs based on column data type:

ID: Int
Name: String
Joining_Date: Date

My dataframe has more than 100 columns. Is there any built-in method to achieve this?

As far as I know there is no built-in function that does this, but here is one way to split a dataframe into multiple dataframes based on column type.

First, let's create some data:

from pyspark.sql.functions import col
from pyspark.sql.types import StructType, StructField, StringType, LongType, DateType

df = spark.createDataFrame([
  (0, 11, "t1", "s1", "2019-10-01"), 
  (0, 22, "t2", "s2", "2019-02-11"), 
  (1, 23, "t3", "s3", "2018-01-10"), 
  (1, 24, "t4", "s4", "2019-10-01")], ["i1", "i2", "s1", "s2", "date"])

df = df.withColumn("date", col("date").cast("date"))

# df.printSchema()
# root
#  |-- i1: long (nullable = true)
#  |-- i2: long (nullable = true)
#  |-- s1: string (nullable = true)
#  |-- s2: string (nullable = true)
#  |-- date: date (nullable = true)
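
As a side note, the same name-to-type pairing can be inspected quickly with df.dtypes, which returns (column name, type string) tuples; this is just for illustration and is not used in the rest of the answer:

df.dtypes
# [('i1', 'bigint'), ('i2', 'bigint'), ('s1', 'string'), ('s2', 'string'), ('date', 'date')]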

Then we group the columns of the dataframe above into a dictionary, where each key is a column type and each value is the list of columns of that type:

d = {}
# group cols into a dict by type
for c in df.schema:
  key = c.dataType
  if key not in d:
    d[key] = [c.name]
  else:
    d[key].append(c.name)

d
# {DateType: ['date'], StringType: ['s1', 's2'], LongType: ['i1', 'i2']}
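
The same grouping can also be written a bit more compactly with collections.defaultdict from the Python standard library; this is just an equivalent sketch, not part of the original answer:

from collections import defaultdict

# equivalent grouping: append each column name under its data type
d = defaultdict(list)
for c in df.schema:
  d[c.dataType].append(c.name)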

Then we iterate over the keys (the column types) and, for each item of the dictionary, generate the schema together with a corresponding empty dataframe:

type_dfs = {}
# create a schema for each type
for k in d.keys():
  schema = StructType(
    [
      StructField(cname, k) for cname in d[k]
    ])

  # finally create an empty df with that schema
  type_dfs[str(k)] = spark.createDataFrame(spark.sparkContext.emptyRDD(), schema)

type_dfs
# {'DateType': DataFrame[date: date],
#  'StringType': DataFrame[s1: string, s2: string],
#  'LongType': DataFrame[i1: bigint, i2: bigint]}

Finally, we can use the generated dataframes by accessing each item of type_dfs:

type_dfs['StringType'].printSchema()

# root
#  |-- s1: string (nullable = true)
#  |-- s2: string (nullable = true)
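
Note that the dataframes built above carry only the per-type schema, with no rows. If each dataframe should also contain the original data, one possible variation (my own extension, not part of the original answer; the name typed_data is hypothetical) is to select the grouped columns from df instead of starting from an empty RDD:

# keep the data: project the original dataframe onto each column group
typed_data = {str(k): df.select(*cols) for k, cols in d.items()}

typed_data['StringType'].show()
# +---+---+
# | s1| s2|
# +---+---+
# | t1| s1|
# | t2| s2|
# | t3| s3|
# | t4| s4|
# +---+---+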
