
Why does withColumn on a dataframe in pyspark mess up the dataframe?

I have a dataframe with the following schema:

last_year_df.printSchema()
root
|-- invNum: string (nullable = true)
|-- custNum: string (nullable = true)
|-- entprsNum: string (nullable = true)
|-- billTp: string (nullable = true)
|-- invAmtUSD: string (nullable = true)
|-- invRevenueTp: string (nullable = true)
|-- entryDt: string (nullable = true)
|-- dueDt: string (nullable = true)
|-- settledDt: string (nullable = true)
|-- days_to_settle: integer (nullable = true)

I can do show() on the fields and describe() on days_to_settle fine. I want to adjust days_to_settle with the following function.

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

def days_adjust(days):
    # Clamp the value to the range [0, 120]
    if days < 0:
        ret_days = 0
    elif days > 120:
        ret_days = 120
    else:
        ret_days = days
    return ret_days

adjust_udf = udf(days_adjust, IntegerType())
last_year_df = last_year_df.withColumn("days_to_settle_adjusted", adjust_udf(last_year_df['days_to_settle']))
#last_year_df = last_year_df.withColumn("days_to_settle_adjusted", last_year_df['days_to_settle'] + 100)

last_year_df.select("settledDt").show()

After the withColumn with the udf, any action I try on the dataframe raises an error. If I use the commented-out withColumn without the udf instead, the dataframe is fine afterwards. Here is the error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-23-900db0b86899> in <module>()
 15 #last_year_df = last_year_df.withColumn("days_to_settle_adjusted", last_year_df['days_to_settle'] + 100)
 16 
 --> 17 last_year_df.select("settledDt").show()
 18 last_year_df.select("dueDt").show()
 19 last_year_df.select("days_to_settle").show()

/usr/local/src/spark/spark-1.6.1-bin-hadoop2.6/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
255         +---+-----+
256         """
--> 257         print(self._jdf.showString(n, truncate))
258 
259     def __repr__(self):

/usr/local/src/spark/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
811         answer = self.gateway_client.send_command(command)
812         return_value = get_return_value(
--> 813             answer, self.gateway_client, self.target_id, self.name)
814 
815         for temp_arg in temp_args:

/usr/local/src/spark/spark-1.6.1-bin-hadoop2.6/python/pyspark/sql/utils.pyc in deco(*a, **kw)
 43     def deco(*a, **kw):
 44         try:
 ---> 45             return f(*a, **kw)
 46         except py4j.protocol.Py4JJavaError as e:
 47             s = e.java_exception.toString()

 /usr/local/src/spark/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
306                 raise Py4JJavaError(
307                     "An error occurred while calling {0}{1}{2}.\n".
--> 308                     format(target_id, ".", name), value)
309             else:
310                 raise Py4JError(

Py4JJavaError: An error occurred while calling o317.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 54.0 failed 1 times, most recent failure: Lost task 0.0 in stage 54.0 (TID 806, localhost): java.lang.ArrayIndexOutOfBoundsException
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:966)
at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at java.lang.Thread.getStackTrace(Thread.java:1117)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:785)
Caused by: java.lang.ArrayIndexOutOfBoundsException
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:966)
at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239)
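The ArrayIndexOutOfBoundsException is thrown inside CatalystTypeConverters$StructConverter while Spark converts the source rows to feed the Python worker, which makes me wonder whether some input rows have fewer fields than the ten in my schema, and the udf is just what forces every row through that conversion path. A check I could run against the original input (a sketch; source_rdd is an invented name for whatever RDD last_year_df was built from):

# Hypothetical check: find rows whose field count doesn't match the 10-field schema
source_rdd.filter(lambda row: len(row) != 10).take(5)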

Why does the withColumn with the udf destroy the dataframe?
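For comparison, the same clamp can be written with built-in column functions instead of a Python udf (a sketch using least, greatest and lit from pyspark.sql.functions, which exist in Spark 1.5+):

from pyspark.sql.functions import least, greatest, lit

# Clamp days_to_settle to [0, 120] without going through a Python udf
last_year_df = last_year_df.withColumn(
    "days_to_settle_adjusted",
    least(greatest(last_year_df['days_to_settle'], lit(0)), lit(120)))

If this version shows fine while the udf version fails, that would point at the Python serialization path rather than at the clamping logic itself.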
