
PySpark - Create DataFrame from Numpy Matrix

I have a numpy matrix:

arr = np.array([[2,3], [2,8], [2,3],[4,5]])

I need to create a PySpark DataFrame from arr. I cannot enter the values manually because the length and values of arr change dynamically, so I need to convert arr into a DataFrame programmatically.

I tried the following code, without success.

df = sqlContext.createDataFrame(arr, ["A", "B"])

However, I get the following error.

TypeError: Can not infer schema for type: <type 'numpy.ndarray'>

Hope this helps! Spark cannot infer a schema from numpy.ndarray rows, so each row is first converted to a list of native Python ints:

import numpy as np

# sample data
arr = np.array([[2, 3], [2, 8], [2, 3], [4, 5]])

# distribute the rows, then cast numpy ints to native Python ints
rdd1 = sc.parallelize(arr)
rdd2 = rdd1.map(lambda x: [int(i) for i in x])
df = rdd2.toDF(["A", "B"])
df.show()

Output is:

+---+---+
|  A|  B|
+---+---+
|  2|  3|
|  2|  8|
|  2|  3|
|  4|  5|
+---+---+
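If you want explicit control over the column types rather than letting toDF infer them, a schema can be passed to createDataFrame. A minimal sketch, assuming the rdd2 built above and a SparkSession named spark:

from pyspark.sql.types import StructType, StructField, IntegerType

# explicit schema: both columns are non-nullable integers
schema = StructType([
    StructField("A", IntegerType(), False),
    StructField("B", IntegerType(), False),
])
df = spark.createDataFrame(rdd2, schema)
df.show()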

No need to use the RDD API. Simply:

mat = np.random.random((10, 3))
cols = ["ColA", "ColB", "ColC"]
# tolist() yields native Python floats, which Spark can infer a schema from
df = spark.createDataFrame(mat.tolist(), cols)
df.show()
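An equivalent route, if pandas is available, is to build a pandas DataFrame first; createDataFrame accepts it directly. A sketch under the same assumptions (the mat and cols from above, a SparkSession named spark):

import pandas as pd

# Spark converts a pandas DataFrame directly, carrying over the column names
pdf = pd.DataFrame(mat, columns=cols)
df = spark.createDataFrame(pdf)
df.show()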
Try this; it will work:

import numpy as np
from pyspark.ml.linalg import Vectors

arr = np.array([[2, 3], [2, 8], [2, 3], [4, 5]])
# flatten, then reshape back into two-column rows (-1 infers the row count)
data = np.concatenate(arr).reshape(-1, 2)
# the first column becomes the label, the rest a dense feature vector
dff = [(int(x[0]), Vectors.dense(x[1:])) for x in data]
mydf = spark.createDataFrame(dff, schema=["label", "features"])
mydf.show(5)
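For reference, the more idiomatic way to build such a features column from an existing DataFrame is pyspark.ml.feature.VectorAssembler. A sketch, assuming the df with columns A and B from the first answer:

from pyspark.ml.feature import VectorAssembler

# combine the numeric columns into a single vector column named "features"
assembler = VectorAssembler(inputCols=["A", "B"], outputCol="features")
features_df = assembler.transform(df)
features_df.show()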
