
Pandas User-Defined Function Py4JJavaError

I'm starting to use @pandas_udf with pyspark, and while testing the examples from the documentation I run into an error that I'm not able to solve.

The code I'm running is:

from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))

@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    # pdf is a pandas.DataFrame
    v = pdf.v
    return pdf.assign(v=v - v.mean())

df.groupby("id").apply(subtract_mean).show()
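
The UDF body itself seems fine: running the same logic with plain pandas (no Spark involved) returns the mean-subtracted values I expect, so the Python side doesn't look like the problem:

import pandas as pd

# Same data and same transformation as the UDF, in plain pandas only
pdf = pd.DataFrame({"id": [1, 1, 2, 2, 2], "v": [1.0, 2.0, 3.0, 5.0, 10.0]})
print(pdf.groupby("id").apply(lambda g: g.assign(v=g.v - g.v.mean())))
# id=1 -> v becomes -0.5, 0.5        (mean 1.5 subtracted)
# id=2 -> v becomes -3.0, -1.0, 4.0  (mean 6.0 subtracted)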

And the error I get is:

Py4JJavaError: An error occurred while calling o53.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 7.0 failed 1 times, most recent failure: Lost task 44.0 in stage 7.0 (TID 132, localhost, executor driver): java.lang.IllegalArgumentException: capacity < 0: (-1 < 0)

I'm using:

pyspark                   2.4.5
py4j                      0.10.7            
pyarrow                   0.15.1

This is an issue with PyArrow versions > 0.15 on Spark 2.4.x; please follow this link for the fix: https://issues.apache.org/jira/browse/SPARK-29367
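
In practice (based on the notes in that ticket and the Spark 2.4.x documentation) there are two workarounds: make PyArrow >= 0.15 fall back to the legacy Arrow IPC format that Spark 2.4.x expects, or downgrade pyarrow below 0.15. A minimal sketch of the first option, assuming a local setup where the driver and the Python workers share the same environment:

import os

# Force PyArrow >= 0.15 to use the pre-0.15 Arrow IPC format that
# Spark 2.4.x understands (see SPARK-29367). Set this before the
# SparkSession and its Python workers start.
os.environ["ARROW_PRE_0_15_IPC_FORMAT"] = "1"

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # On a cluster the executors need the variable as well; one way is:
    .config("spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT", "1")
    .getOrCreate()
)

The alternative is simply pinning the package, e.g. pip install "pyarrow<0.15" (pyarrow 0.14.x is known to work with Spark 2.4.x).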
