TypeError when converting a pandas DataFrame to a Spark DataFrame in PySpark

I did my research but didn't find anything on this. I want to convert a simple pandas.DataFrame to a Spark DataFrame, like this:

import pandas as pd

df = pd.DataFrame({'col1': ['a', 'b', 'c'], 'col2': [1, 2, 3]})
sc_sql.createDataFrame(df, schema=df.columns.tolist())

The error I get is:

TypeError: Can not infer schema for type: <class 'str'>

I tried something even simpler:

df = pd.DataFrame([1, 2, 3])
sc_sql.createDataFrame(df)

And I get:

TypeError: Can not infer schema for type: <class 'numpy.int64'>

Any help? Do I need to manually specify a schema or something?

sc_sql is a pyspark.sql.SQLContext; I am in a Jupyter notebook on Python 3.4 with Spark 1.6.
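For reference, here is a minimal sketch of the setup assumed above; the names sc and sc_sql follow the question, and the local-mode configuration is illustrative, not part of the original post:

# Minimal Spark 1.6 setup sketch (names follow the question; config is illustrative)
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "pandas-to-spark")  # assumed local-mode SparkContext
sc_sql = SQLContext(sc)                        # the SQLContext the question calls sc_sql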

Thanks!

It's related to your Spark version: more recent Spark releases make type inference smarter, but Spark 1.6 cannot infer types like numpy.int64. You can fix this by specifying the schema explicitly, like this:

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

mySchema = StructType([StructField("col1", StringType(), True), StructField("col2", IntegerType(), True)])
sc_sql.createDataFrame(df, schema=mySchema)
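An explicit schema works because it bypasses type inference entirely, so Spark never has to inspect the NumPy scalar values. If you'd rather not spell out the schema, another workaround is to hand Spark plain Python objects instead of NumPy scalars. This is a sketch, not from the original answer; it assumes the df and sc_sql from the question and relies on ndarray.tolist() converting NumPy scalars to native Python types:

# Workaround sketch: convert the pandas data to native Python types first,
# so Spark 1.6's inference never sees numpy.int64 values.
rows = df.values.tolist()  # ndarray.tolist() yields Python int/str, not NumPy scalars
spark_df = sc_sql.createDataFrame(rows, schema=df.columns.tolist())
spark_df.show()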
