How can I convert an empty pandas dataframe to a Pyspark dataframe?
I'd like a safe way to convert a pandas dataframe to a PySpark dataframe which can handle cases where the pandas dataframe is empty (say, after some filter has been applied). For example, the following will fail:
# Assumes you have an active Spark session
import pandas as pd
raw_data = []
cols = ['col_1', 'col_2', 'col_3']
types_dict = {
    'col_1': str,
    'col_2': float,
    'col_3': bool
}
pandas_df = pd.DataFrame(raw_data, columns=cols).astype(types_dict)
spark_df = spark.createDataFrame(pandas_df)
Resulting error: ValueError: can not infer schema from empty dataset
One option is to build a function which iterates through the pandas dtypes and constructs a PySpark dataframe schema, but that could get a little complicated with structs and whatnot. Is there a simpler solution?
How can I convert an empty pandas dataframe to a PySpark dataframe and maintain the column datatypes?
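One way to sketch the dtype-mapping idea mentioned above is to translate each pandas dtype into a Spark SQL DDL type name and pass the resulting schema string to `createDataFrame` explicitly, so Spark never has to infer a schema from the empty data. The `pandas_schema_to_ddl` helper and its type mapping below are illustrative (flat scalar columns only, no structs or arrays), and `spark` is assumed to be a live SparkSession:

```python
import pandas as pd

# Illustrative mapping from pandas dtype "kind" codes to Spark SQL type names.
# This sketch only covers flat scalar columns, not structs or arrays.
_KIND_TO_SPARK = {'O': 'string', 'f': 'double', 'i': 'bigint',
                  'b': 'boolean', 'M': 'timestamp'}

def pandas_schema_to_ddl(pdf):
    """Build a Spark DDL schema string from a pandas DataFrame's dtypes."""
    parts = ['{} {}'.format(col, _KIND_TO_SPARK[pdf[col].dtype.kind])
             for col in pdf.columns]
    return ', '.join(parts)

pandas_df = (pd.DataFrame([], columns=['col_1', 'col_2', 'col_3'])
             .astype({'col_1': str, 'col_2': float, 'col_3': bool}))
ddl = pandas_schema_to_ddl(pandas_df)
print(ddl)  # col_1 string, col_2 double, col_3 boolean

# With a live session, the explicit schema avoids inference on empty data:
# spark_df = spark.createDataFrame(pandas_df, schema=ddl)
```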
If I understand your problem correctly, try something with a try-except block.
def test(df):
    try:
        """
        Whatever operations you want on your df.
        """
    except:
        df = pd.DataFrame({'col_1': pd.Series(dtype='str'),
                           'col_2': pd.Series(dtype='float'),
                           'col_3': pd.Series(dtype='bool'),
                           })
    return df
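A minimal usage sketch of this fallback pattern (the `safe_filter` wrapper, its filter condition, and the column names are all hypothetical):

```python
import pandas as pd

def safe_filter(df):
    """Return the filtered frame, or a typed empty frame if the operation fails."""
    try:
        # Hypothetical operation on df; raises KeyError if the column is missing.
        return df[df['col_2'] > 0.5]
    except KeyError:
        # Fall back to an empty frame that still carries the expected dtypes,
        # so the column types are preserved downstream.
        return pd.DataFrame({'col_1': pd.Series(dtype='str'),
                             'col_2': pd.Series(dtype='float'),
                             'col_3': pd.Series(dtype='bool')})

fallback = safe_filter(pd.DataFrame())  # no 'col_2' column -> typed empty frame
print(list(fallback.columns))   # ['col_1', 'col_2', 'col_3']
print(fallback.dtypes['col_2'])  # float64
```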