I'd like to convert a pyspark.sql.dataframe.DataFrame to a pyspark.rdd.RDD of strings. I converted a DataFrame df to an RDD data:
data = df.rdd
type(data)
## pyspark.rdd.RDD
The new RDD data contains Row objects:
first = data.first()
type(first)
## pyspark.sql.types.Row
data.first()
## Row(_c0=u'aaa', _c1=u'bbb', _c2=u'ccc', _c3=u'ddd')
I'd like to convert each Row to a list of strings, like the example below:

u'aaa', u'bbb', u'ccc', u'ddd'
Thanks
A PySpark Row is just a tuple and can be used as such. All you need here is a simple map (or flatMap, if you want to flatten the rows as well) with list:
data.map(list)
or, if the columns may hold different types and you want strings, cast each value explicitly:
data.map(lambda row: [str(c) for c in row])
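Since a Row behaves like a tuple, the mapping itself can be illustrated without a Spark cluster. In this sketch, plain tuples stand in for pyspark.sql.types.Row (an assumption made so the example runs standalone); the list comprehensions mirror what map(list) and flatMap(list) do per partition:

```python
# Plain tuples stand in for pyspark.sql.types.Row, which is itself
# a tuple subclass, so list() over it yields the column values.
rows = [('aaa', 'bbb', 'ccc', 'ddd'),
        ('eee', 'fff', 'ggg', 'hhh')]

# Equivalent of data.map(list): one list of strings per row.
as_lists = [list(row) for row in rows]
print(as_lists[0])  # ['aaa', 'bbb', 'ccc', 'ddd']

# Equivalent of data.flatMap(list): all values flattened into one sequence.
flattened = [value for row in rows for value in row]
print(flattened)  # ['aaa', 'bbb', 'ccc', 'ddd', 'eee', 'fff', 'ggg', 'hhh']
```

On a real RDD the same transformations are lazy and return new RDDs, which you would materialize with collect() or take().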
The accepted answer is old. Since Spark 2.0, you must explicitly convert the DataFrame to an RDD by adding .rdd to the statement. Therefore, the equivalent of this Spark 1.x statement:
data.map(list)
Should now be:
data.rdd.map(list)
in Spark 2.0. This relates to the accepted answer in this post.