Create a dataframe from a list in pyspark.sql

I am totally lost in a weird situation. I have a list l:

l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect()
print l, type(l)

The output looks like this:

[(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)] <type 'list'>
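As the traceback below reveals, the elements are actually numpy.float64, not native floats, even though they print identically. A quick illustration (assuming an older numpy whose scalar repr is plain):

import numpy as np

print([(np.float64(0.0), np.float64(59.0))])  # [(0.0, 59.0)] -- looks like plain floats
print(type(np.float64(0.0)))                  # <type 'numpy.float64'>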

When I try to create a dataframe from this list:

m = sqlContext.createDataFrame(l, ["prediction", "label"])

It threw the error message:

TypeError                                 Traceback (most recent call last)
<ipython-input-90-4a49f7f67700> in <module>()
 56 l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect()
 57 print l, type(l)
---> 58 m = sqlContext.createDataFrame(l, ["prediction", "label"])
 59 '''
 60 g = example_data.map(lambda x:gradient_summand(w, x)).sum()

/databricks/spark/python/pyspark/sql/context.py in createDataFrame(self, data, schema, samplingRatio)
423             rdd, schema = self._createFromRDD(data, schema, samplingRatio)
424         else:
--> 425             rdd, schema = self._createFromLocal(data, schema)
426         jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
427         jdf = self._ssql_ctx.applySchemaToPythonRDD(jrdd.rdd(), schema.json())

/databricks/spark/python/pyspark/sql/context.py in _createFromLocal(self, data, schema)
339 
340         if schema is None or isinstance(schema, (list, tuple)):
--> 341             struct = self._inferSchemaFromList(data)
342             if isinstance(schema, (list, tuple)):
343                 for i, name in enumerate(schema):

/databricks/spark/python/pyspark/sql/context.py in _inferSchemaFromList(self, data)
239             warnings.warn("inferring schema from dict is deprecated,"
240                           "please use pyspark.sql.Row instead")
--> 241         schema = reduce(_merge_type, map(_infer_schema, data))
242         if _has_nulltype(schema):
243             raise ValueError("Some of types cannot be determined after inferring")

/databricks/spark/python/pyspark/sql/types.py in _infer_schema(row)
831         raise TypeError("Can not infer schema for type: %s" % type(row))
832 
--> 833     fields = [StructField(k, _infer_type(v), True) for k, v in items]
834     return StructType(fields)
835 

/databricks/spark/python/pyspark/sql/types.py in _infer_type(obj)
808             return _infer_schema(obj)
809         except TypeError:
--> 810             raise TypeError("not supported type: %s" % type(obj))
811 
812 

TypeError: not supported type: <type 'numpy.float64'>

But when I hard-code this list inline:

tt = sqlContext.createDataFrame([(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)], ["prediction", "label"])
tt.collect()

It works well:

[Row(prediction=0.0, label=59.0),
 Row(prediction=0.0, label=51.0),
 Row(prediction=0.0, label=81.0),
 Row(prediction=0.0, label=8.0),
 Row(prediction=0.0, label=86.0),
 Row(prediction=0.0, label=86.0),
 Row(prediction=0.0, label=60.0),
 Row(prediction=0.0, label=54.0),
 Row(prediction=0.0, label=54.0),
 Row(prediction=0.0, label=84.0)]

What caused this problem, and how can I fix it? Any hint would be appreciated.

You have a list of numpy.float64 values, and Spark's schema inference doesn't support that type. When you hard-code the list, on the other hand, it is just a list of native Python float values.
Here is a question with an answer that goes over how to convert numpy's datatypes to Python's native ones.
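For instance, a minimal sketch of that conversion (the variable v is illustrative):

import numpy as np

v = np.float64(59.0)
print(type(v))         # <type 'numpy.float64'> -- Spark cannot infer a schema from this
print(type(float(v)))  # <type 'float'> -- native Python float
print(type(v.item()))  # <type 'float'> -- .item() also returns the native type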

I have had this problem too. The following is my solution, which uses float() to convert the type:

1. At the beginning, the element type is np.float64:

my_rdd.collect()   
output ==>  [2.8,3.9,1.2]   

2. Convert the type to Python float:

my_convert = my_rdd.map(lambda x: (float(x),)).collect()
output ==> [(2.8,),(3.9,),(1.2,)]  

3. No error is raised anymore:

sqlContext.createDataFrame(my_convert).show()

4. For your sample, I suggest:

l = example_data.map(lambda x: get_labeled_prediction(w,x)).map(lambda y: (float(y[0]), float(y[1]))).collect()
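With the elements converted to native floats, schema inference should succeed. A short sketch of the follow-up step, reusing the names from the question (l, sqlContext):

m = sqlContext.createDataFrame(l, ["prediction", "label"])
m.show()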
