
How to apply function to each row of specified column of PySpark DataFrame

I have a PySpark DataFrame consisting of three columns, whose structure is as below.

In[1]: df.take(1)    
Out[1]:
[Row(angle_est=-0.006815859163590619, rwsep_est=0.00019571401752467945, cost_est=34.33651951754235)]

What I want to do is retrieve each value of the first column ( angle_est ) and pass it as the parameter xMisallignment to a defined function, which sets a particular property of a class object. The defined function is:

import warnings
import numpy as np

def setMisAllignment(self, xMisallignment):
    if np.abs(xMisallignment) > 0.8:
        warnings.warn('You might set misallignment angle too large.')
    self.MisAllignment = xMisallignment

I am trying to select the first column, convert it into an RDD, and apply the above function inside map(), but it does not seem to work; MisAllignment did not change.

df.select(df.angle_est).rdd.map(lambda row: model0.setMisAllignment(row))

In[2]: model0.MisAllignment
Out[2]: 0.00111511718224

Does anyone have ideas to help me make that function work? Thanks in advance!

You can register your function as a Spark UDF, similar to the following:

spark.udf.register("misallign", setMisAllignment)

You can find many examples of creating and registering UDFs in this test suite: https://github.com/apache/spark/blob/master/sql/core/src/test/java/test/org/apache/spark/sql/JavaUDFSuite.java

Hope it answers your question.
