
Why is PySpark failing when running a pandas_udf?

I'm getting an error when running a pandas UDF in PySpark. This is the UDF, which uses the external library textdistance:

def algoritmos_comparacion(num_serie_rec, num_serie_exp):
    d = textdistance.hamming(num_serie_rec, num_serie_exp)
    return str(d)

Then I register the function:

algoritmos_comparacion_udf = f.pandas_udf(algoritmos_comparacion, StringType())

And finally I use this UDF:

df.withColumn("hamming", algoritmos_comparacion_udf(f.col("num_serie_exp"), f.col("num_serie_rec")))

I have pandas installed and pyarrow version 0.8.0. I'm getting this error:

TypeError: 'Series' objects are mutable, thus they cannot be hashed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 235, in main
    process()
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 230, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/serializers.py", line 267, in dump_stream
    for series in iterator:
  File "<string>", line 1, in <lambda>
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 92, in <lambda>
    return lambda *a: (verify_result_length(*a), arrow_return_type)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/worker.py", line 83, in verify_result_length
    result = f(*a)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/pyspark.zip/pyspark/util.py", line 55, in wrapper
    return f(*args, **kwargs)
  File "/home/bguser/SII-IVA/jobs/caso3/caso3.py", line 39, in algoritmos_comparacion
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/textdistance/algorithms/edit_based.py", line 49, in __call__
    result = self.quick_answer(*sequences)
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/textdistance/algorithms/base.py", line 91, in quick_answer
    if self._ident(*sequences):
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/textdistance/algorithms/base.py", line 110, in _ident
    if e1 != e2:
  File "/DATOS/var/log/hadoop/yarn/local/usercache/bguser/appcache/application_1563894657824_0447/container_e66_1563894657824_0447_01_000002/virtualenv_application_1563894657824_0447_0/lib/python3.6/site-packages/pandas/core/generic.py", line 1556, in __nonzero__
    self.__class__.__name__
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
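
Looking at the traceback, the pandas UDF hands whole pandas Series (one per column) to algoritmos_comparacion, so textdistance ends up comparing two Series objects: hashing them fails with the TypeError above, and the fallback comparison `if e1 != e2:` then raises the ambiguous-truth-value ValueError. The same failure can be reproduced outside Spark with a minimal sketch (the example values below are made up):

import pandas as pd
import textdistance

# hypothetical example values, just to show the failure mode
num_serie_rec = pd.Series(["ABC123", "XYZ789"])
num_serie_exp = pd.Series(["ABC124", "XYZ789"])

# Calling textdistance on whole Series, which is what the pandas UDF ends up
# doing with the function above, reproduces the TypeError/ValueError chain
# shown in the traceback.
textdistance.hamming(num_serie_rec, num_serie_exp)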

How can I solve this error?

To reproduce it, you can run a pandas_udf with any algorithm from the textdistance library. For example:

import textdistance
import pyspark.sql.functions as f
from pyspark.sql.types import MapType, StringType

def algoritmos_comparacion(num_serie_rec, num_serie_exp):
    data = {}
    algoritmos = {
        "hamming":textdistance.hamming,
        "levenshtein":textdistance.levenshtein,
        "damerau_levenshtein":textdistance.damerau_levenshtein,
        "jaro":textdistance.jaro,
        "mlipns":textdistance.mlipns,
        "strcmp95":textdistance.strcmp95,
        "needleman_wunsch":textdistance.needleman_wunsch,
        "gotoh":textdistance.gotoh,
        "smith_waterman":textdistance.smith_waterman
    }
    for name, alg in algoritmos.items():
        try:
            data[name] = str(alg(num_serie_rec, num_serie_exp))
        except Exception:
            data[name] = "ERROR"
    return data

algoritmos_comparacion_udf = f.pandas_udf(algoritmos_comparacion, MapType(StringType(), StringType()))

dataframe.withColumn("algorithms", algoritmos_comparacion_udf(f.col("a"), f.col("b")))

Thanks.

Solved with this: the pandas UDF receives whole pandas Series rather than individual strings, so the comparison has to be applied element-wise over the two Series:

algoritmos_comparacion_udf = f.pandas_udf(lambda s1, s2: s1.combine(s2, algoritmos_comparacion), MapType(StringType(), StringType()))

dataframe.withColumn("algorithms", algoritmos_comparacion_udf(f.col("a"), f.col("b")))
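
For the single-algorithm hamming UDF at the top of the question, the same idea looks like this. This is only a minimal sketch, assuming Spark 2.3+ with pyarrow available; the DataFrame and column names are the ones from the question, and names like hamming_str and hamming_udf are just illustrative:

import pandas as pd
import textdistance
import pyspark.sql.functions as f
from pyspark.sql.types import StringType

def hamming_str(a, b):
    # plain Python strings in, string result out
    return str(textdistance.hamming(a, b))

# The pandas UDF receives one pandas Series per column, so the scalar
# comparison is applied row by row and a Series of the same length is returned.
hamming_udf = f.pandas_udf(
    lambda rec, exp: pd.Series([hamming_str(a, b) for a, b in zip(rec, exp)]),
    StringType(),
)

df = df.withColumn("hamming", hamming_udf(f.col("num_serie_rec"), f.col("num_serie_exp")))

Since the computation is inherently per row, a plain f.udf(hamming_str, StringType()) over the two columns should also work; a regular UDF receives plain Python values instead of Series.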
