
Pyspark with error self._sock.recv_into(b) socket.timeout: timed out

The goal is to use a UDF to categorize rows. I am using PySpark on Windows.

Simple functions and operations like filter appear to work.

Any direction on how to address the timeout/socket failure would be helpful (see the error below).

There are no nulls in the data.

from pyspark.sql import functions as F
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType, StringType

def BreakDown(arr_value):
    start_year = arr_value[0]
    start_month = arr_value[1]
    end_year = arr_value[2]
    end_month = arr_value[3]
    curr_year = arr_value[4]
    curr_month = arr_value[5]
    if (curr_year == start_year) & (curr_month >= start_month): return 1
    elif (curr_year == end_year) & (curr_month <= end_month): return 1
    elif (curr_year > start_year) & (curr_year < end_year): return 1
    else: return 0


udfBreakDown = udf(BreakDown, IntegerType())

temp = temp.withColumn(
    'include',
    udfBreakDown(F.struct('start_year', 'start_month', 'end_year', 'end_month', 'curr_year', 'curr_month'))
)

PythonException: An exception was thrown from the Python worker. Please see the stack trace below.

Traceback (most recent call last):
  File "E:\spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 585, in main
  File "E:\spark\spark-3.0.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\serializers.py", line 593, in read_int
    length = stream.read(4)
  File "C:\ProgramData\Anaconda3\lib\socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

Always avoid UDFs when you can use Spark's built-in functions instead. You can rewrite your logic with the when function like this:

from pyspark.sql import functions as F

def get_include_col():
    c = F.when((F.col("curr_year") == F.col("start_year")) & (F.col("curr_month") >= F.col("start_month")), F.lit(1)) \
        .when((F.col("curr_year") == F.col("end_year")) & (F.col("curr_month") <= F.col("end_month")), F.lit(1)) \
        .when((F.col("curr_year") > F.col("start_year")) & (F.col("curr_year") < F.col("end_year")), F.lit(1)) \
        .otherwise(F.lit(0))
    return c


temp = temp.withColumn('include', get_include_col())
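To sanity-check the logic before running it on the real data, you can apply the same expression to a tiny DataFrame first. This is only a sketch: the sample rows below are made up, and only the column names are taken from the question.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical rows: (start_year, start_month, end_year, end_month, curr_year, curr_month)
sample = spark.createDataFrame(
    [(2019, 3, 2021, 6, 2019, 5),   # same year as start, month on/after start -> 1
     (2019, 3, 2021, 6, 2021, 8),   # same year as end, month after end -> 0
     (2019, 3, 2021, 6, 2020, 1)],  # strictly between start and end years -> 1
    ["start_year", "start_month", "end_year", "end_month", "curr_year", "curr_month"]
)

sample.withColumn("include", get_include_col()).show()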

You can also use functools.reduce to dynamically generate the when expressions without having to type them all out. The initial accumulator is the functions module F itself, so the first iteration calls F.when(...) and each following case chains another .when(...) onto the resulting Column. For example:

import functools
from pyspark.sql import functions as F

cases = [
    ("curr_year = start_year and curr_month >= start_month", 1),
    ("curr_year = end_year and curr_month <= end_month", 1),
    ("curr_year > start_year and curr_year < end_year", 1)
]

include_col = functools.reduce(
    lambda acc, x: acc.when(F.expr(x[0]), F.lit(x[1])),
    cases,
    F
).otherwise(F.lit(0))

temp = temp.withColumn('include', include_col)
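As a quick check, printing the resulting Column shows the CASE WHEN expression that the reduce built, and applying it to the hypothetical sample DataFrame from the earlier sketch should give the same include values as the explicit when chain:

print(include_col)                                # shows the generated CASE WHEN ... ELSE 0 END expression
sample.withColumn("include", include_col).show()  # matches the output of get_include_col()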
