
Unable to finish the Spark job on a Spark standalone cluster

I am very new to Spark, having used it for only a week. Below is my PySpark code, running on a standalone Spark cluster with a single master and two worker nodes. I am trying to run a job that reads a dataset of about 1.0 million records, performs some operations on the data, and then dumps the resulting dataframe into an Oracle table. I cannot get the job to finish. The program appears to create 404 partitions to complete the tasks; on the console I can see that 403/404 tasks have completed, but the last task, on partition 404, takes forever, so the job never finishes. Can anyone point out the problem with my code, or help me optimize Spark's performance? Pointers to any tutorial or guide would also help. Thanks in advance.

from pyspark.sql import SparkSession

# creating a spark session
spark = SparkSession \
    .builder \
    .appName("pyspark_testing_29012020") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
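
# NOTE: dropDupeDfCols() is called later in this script but was never defined
# in the original post. The sketch below is an assumed reconstruction of such
# a helper -- dropping the duplicate column names a join leaves behind -- and
# not the author's actual code.
def dropDupeDfCols(df):
    new_cols = []   # first occurrence of each column name, in order
    dup_idx = []    # positions of repeated column names
    for i, c in enumerate(df.columns):
        if c not in new_cols:
            new_cols.append(c)
        else:
            dup_idx.append(i)
    # rename columns to their positions, drop the duplicates by position,
    # then restore the de-duplicated names
    df = df.toDF(*[str(i) for i in range(len(df.columns))])
    for i in dup_idx:
        df = df.drop(str(i))
    return df.toDF(*new_cols)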

# target table schema and column order
df_target = spark.read.csv("mycsv path", header = True)
df_local_schema = df_target.schema
df_column_order = df_target.columns

# dataframe with input file/dataset values and schema
df_source = spark.read\
    .format("csv")\
    .option("header", "false")\
    .option("inferschema", "true")\
    .option("delimiter", ",")\
    .schema(df_local_schema)\
    .load("csv path")


# dataframe with the target file/dataset values
df_target = spark.read\
    .format("jdbc") \
    .option("url", "jdbc:oracle:thin:system/oracle123@127.0.0.1:0101:orcl") \
    .option("dbtable", "mydata") \
    .option("user", "system") \
    .option("password", "oracle123") \
    .option("driver", "oracle.jdbc.driver.OracleDriver")\
    .load()


# splitting the target table in to upper and lower sections
df_target_upper = df_target.where(df_target['key'] < 5) # set A
df_source_upper = df_source.where(df_source['key'] < 5) # set B
df_source_lower = df_source.where(df_source['key'] > 4) # set D
df_target_lower = df_target.where(df_target['key'] > 4) # set C


''' now programming for the upper segment of the data '''

# set operation A-B
A_minus_B = df_target_upper.join(df_source_upper,
                                 on=['key1', 'key2', 'key3', 'key4'],
                                 how='left_anti')
A_minus_B = A_minus_B.select(sorted(df_column_order))


# set operation B-A
B_minus_A = df_source_upper.join(df_target_upper,
                                 on=['key1', 'key2', 'key3', 'key4'],
                                 how='left_anti')
B_minus_A = B_minus_A.select(sorted(df_column_order))


# union of A-B and B-A
AmB_union_BmA = A_minus_B.union(B_minus_A)
AmB_union_BmA = AmB_union_BmA.select(sorted(df_column_order))

# A-B left anti B-A to get the uncommon records between the two dataframes
new_df = A_minus_B.join(B_minus_A, on=['key'], how='left_anti')
new_df = new_df.select(sorted(df_column_order))


AnB = df_target_upper.join(df_source_upper,
                           on=['key1', 'key2', 'key3', 'key4'],
                           how='inner')

df_AnB_without_dupes = dropDupeDfCols(AnB)
new_AnB = df_AnB_without_dupes.select(sorted(df_column_order))


final_df = AmB_union_BmA.union(new_AnB)
final_df.show()
result_df = B_minus_A.union(new_df)

df_result_upper_seg = result_df.union(new_AnB)



''' now programming for the lower segment of the data '''

# set operation C-D
C_minus_D = df_target_lower.join(df_source_lower, on=['key'], how='left_anti')
C_minus_D = C_minus_D.select(sorted(df_column_order))


# set operation D-C
D_minus_C = df_source_lower.join(df_target_lower, on=['key'], how='left_anti')
D_minus_C = D_minus_C.select(sorted(df_column_order))


# union of C-D and D-C
CmD_union_DmC = C_minus_D.union(D_minus_C)
CmD_union_DmC = CmD_union_DmC.select(sorted(df_column_order))


# C-D left anti D-C to get the uncommon records between the two dataframes
lower_new_df = C_minus_D.join(D_minus_C, on=['key'], how='left_anti')
lower_new_df = lower_new_df.select(sorted(df_column_order))



CnD = df_target_lower.join(df_source_lower,
                           on=['key'], how='inner')


new_CnD = dropDupeDfCols(CnD)
new_CnD = new_CnD.select(sorted(df_column_order))

lower_final_df = CmD_union_DmC.union(new_CnD)

result_df_lower = D_minus_C.union(lower_new_df)

df_result_lower_seg = result_df_lower.union(new_CnD)


# NOTE: df_final_result is never defined in the original post; presumably it
# is the union of the two result segments computed above:
df_final_result = df_result_upper_seg.union(df_result_lower_seg)

df_final_result.write \
    .format("jdbc") \
    .mode("overwrite")\
    .option("url", "jdbc:oracle:thin:system/oracle123@127.0.0.1:1010:orcl") \
    .option("dbtable", "mydata") \
    .option("user", "system") \
    .option("password", "oracle123") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .save()
  1. Take a look at the Spark UI and the Spark monitoring guide.
  2. Try splitting your job into several steps, then find the step that fails (see the sketch below).
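
As a minimal sketch of point 2, assuming the dataframes from the question: persist each intermediate result and force it with a count(), printing its partition count, so the console shows exactly which step produces the straggler partition; repartitioning before the JDBC dump often helps when a join has left most of the rows in one partition. The materialize() helper, the repartition factor of 8, and the batchsize value are illustrative assumptions, not tested settings.

# force each stage to run on its own and report row/partition counts
def materialize(df, name):
    df = df.persist()
    print(name, "rows:", df.count(),
          "partitions:", df.rdd.getNumPartitions())
    return df

df_result_upper_seg = materialize(df_result_upper_seg, "upper segment")
df_result_lower_seg = materialize(df_result_lower_seg, "lower segment")

df_final_result = df_result_upper_seg.union(df_result_lower_seg)

# spread rows evenly before the JDBC write; 8 is an illustrative value
df_final_result = df_final_result.repartition(8)

df_final_result.write \
    .format("jdbc") \
    .mode("overwrite") \
    .option("url", "jdbc:oracle:thin:system/oracle123@127.0.0.1:1010:orcl") \
    .option("dbtable", "mydata") \
    .option("batchsize", "10000") \
    .option("user", "system") \
    .option("password", "oracle123") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .save()

If one step reports a single oversized partition, the Stages tab of the Spark UI (point 1) will show the same skew as one long-running task.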
