PySpark Dataframe create new column based on function return value

I have a dataframe and I want to add a new column based on a value returned by a function. The parameters to this function are four columns from the same dataframe. This one and this one are somewhat similar to what I want, but they don't answer my question.

Here is my data frame (there are more columns than these four):

 + ------ + ------ + ------ + ------ +
 | lat1   | lng1   | lat2   | lng2   |
 + ------ + ------ + ------ + ------ +
 | -32.92 | 151.80 | -32.89 | 151.71 |
 | -32.92 | 151.80 | -32.89 | 151.71 |
 | -32.92 | 151.80 | -32.89 | 151.71 |
 | -32.92 | 151.80 | -32.89 | 151.71 |
 | -32.92 | 151.80 | -32.89 | 151.71 |
 + ------ + ------ + ------ + ------ +

and I want to add another column "distance", which is the total distance between the two location points (latitude/longitude). I have a function that takes the four location values as arguments and returns the distance as a Float.

import math

def get_distance(lat_1, lng_1, lat_2, lng_2):
    # Haversine-style formula; note that Python's math.sin/math.cos operate on radians
    d_lat = lat_2 - lat_1
    d_lng = lng_2 - lng_1

    temp = (
        math.sin(d_lat / 2) ** 2
        + math.cos(lat_1)
        * math.cos(lat_2)
        * math.sin(d_lng / 2) ** 2
    )

    # 2 * asin(sqrt(temp)) is the central angle; 6367.0 approximates the Earth's radius in km
    return 6367.0 * (2 * math.asin(math.sqrt(temp)))

Here is my attempt, which resulted in an error. I'm not sure about this approach either; it's based on the other questions I've already mentioned.

from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

udf_func = udf(lambda lat_1, lng_1, lat_2, lng_2: get_distance(lat_1, lng_1, lat_2, lng_2), returnType=FloatType())

df1 = df.withColumn('difference', udf_func(df.lat1, df.lng1, df.lat2, df.lng2))
df1.show()

Here is the error stack trace:

An error occurred while calling o1300.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 50.0 failed 4 times, most recent failure: Lost task 0.3 in stage 50.0 (TID 341, data05.dac.local, executor 255): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 171, in main
    process()
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 166, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 103, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "<string>", line 1, in <lambda>
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 70, in <lambda>
    return lambda *a: f(*a)
  File "<stdin>", line 2, in <lambda>
  File "<stdin>", line 5, in get_distance
TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:144)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:87)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2386)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withNewExecutionId(Dataset.scala:2788)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2385)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2392)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2128)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2127)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2818)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2127)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2342)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
    at sun.reflect.GeneratedMethodAccessor94.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 171, in main
    process()
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 166, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 103, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "<string>", line 1, in <lambda>
  File "/hadoop/log/yarn/local/usercache/baiga/appcache/application_1541820416317_0349/container_e122_1541820416317_0349_01_000269/pyspark.zip/pyspark/worker.py", line 70, in <lambda>
    return lambda *a: f(*a)
  File "<stdin>", line 2, in <lambda>
  File "<stdin>", line 5, in get_distance
TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:144)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:87)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

Please guide.

Let me rewrite it so that people can understand the context. There are two steps:

1. The DataFrame, as originally created, had its columns in String format, so calculations can't be done on them. Therefore, as a first step, we must convert all four columns to Float.

2. Apply a UDF on this DataFrame to create the new column distance.

import math
from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType
df = sqlContext.createDataFrame([('-32.92','151.80','-32.89','151.71'),('-32.92','151.80','-32.89','151.71'),
                                 ('-32.92','151.80','-32.89','151.71'),('-32.92','151.80','-32.89','151.71'),
                                 ('-32.92','151.80','-32.89','151.71'),], ("lat1", "lng1", "lat2","lng2"))
print('Original Schema - columns imported as "String"')
df.printSchema()       #All columns are strings.
# Converting String based numbers into float.
df = df.withColumn('lat1', df.lat1.cast("float"))\
       .withColumn('lng1', df.lng1.cast("float"))\
       .withColumn('lat2', df.lat2.cast("float"))\
       .withColumn('lng2', df.lng2.cast("float"))

print('Schema after converting "String" to "Float"')
df.printSchema()       #All columns are float now.   
df.show()    
#Function defined by user, to calculate distance between two points on the globe.
def get_distance(lat_1, lng_1, lat_2, lng_2): 
    d_lat = lat_2 - lat_1
    d_lng = lng_2 - lng_1 

    temp = (  
    math.sin(d_lat / 2) ** 2 
      + math.cos(lat_1) 
      * math.cos(lat_2) 
      * math.sin(d_lng / 2) ** 2
    )
    return 6367.0 * (2 * math.asin(math.sqrt(temp))) 

udf_func = udf(get_distance,FloatType()) #Creating a 'User Defined Function' to calculate distance between two points.
df = df.withColumn("distance",udf_func(df.lat1, df.lng1, df.lat2, df.lng2)) #Creating column "distance" based on function 'get_distance'
df.show()

Output:

Original Schema - columns imported as "String"
root
 |-- lat1: string (nullable = true)
 |-- lng1: string (nullable = true)
 |-- lat2: string (nullable = true)
 |-- lng2: string (nullable = true)

Schema after converting "String" to "Float"
root
 |-- lat1: float (nullable = true)
 |-- lng1: float (nullable = true)
 |-- lat2: float (nullable = true)
 |-- lng2: float (nullable = true)

+------+-----+------+------+
|  lat1| lng1|  lat2|  lng2|
+------+-----+------+------+
|-32.92|151.8|-32.89|151.71|
|-32.92|151.8|-32.89|151.71|
|-32.92|151.8|-32.89|151.71|
|-32.92|151.8|-32.89|151.71|
|-32.92|151.8|-32.89|151.71|
+------+-----+------+------+

+------+-----+------+------+---------+
|  lat1| lng1|  lat2|  lng2| distance|
+------+-----+------+------+---------+
|-32.92|151.8|-32.89|151.71|196.45587|
|-32.92|151.8|-32.89|151.71|196.45587|
|-32.92|151.8|-32.89|151.71|196.45587|
|-32.92|151.8|-32.89|151.71|196.45587|
|-32.92|151.8|-32.89|151.71|196.45587|
+------+-----+------+------+---------+

Code works perfectly now.

The stack trace part about unicode suggests that the type of the columns is StringType, since you can't subtract two strings. You can check using df.printSchema().

If you convert all your lats and longs to floats (e.g. float(lat1)) prior to any calculation, your udf should execute fine.
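
A minimal sketch of that approach, assuming the string-typed df and the get_distance function from the question are in scope (the wrapper name get_distance_from_strings is just illustrative):

from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

df.printSchema()   # confirm the four columns are string-typed

# Cast the string values to float inside the Python function, before any arithmetic
def get_distance_from_strings(lat_1, lng_1, lat_2, lng_2):
    return get_distance(float(lat_1), float(lng_1), float(lat_2), float(lng_2))

udf_func = udf(get_distance_from_strings, FloatType())
df1 = df.withColumn('distance', udf_func(df.lat1, df.lng1, df.lat2, df.lng2))
df1.show()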
