UPDATE table in SQL SERVER database with data in HIVE using Spark
My master table is in SQL Server, and I want to update a few of its columns based on a 3-column match condition between my master table (in the SQL Server database) and the target table (in HIVE). Both tables have many columns, but I am only interested in the 6 columns below:
The 3 columns I want to update in the master table are
"INSPECTED_BY", "INSPECTION_COMMENTS" and "SIGNED_BY"
The columns I want to use as the match condition are
"SERVICE_NUMBER", "PART_ID" and "LOTID"
I tried the code below, but it gives me a NullPointerException:
val df = spark.table("location_of_my_table_in_hive")
df.show(false)
df.foreachPartition(partition => {
  val Connection = DriverManager.getConnection(SQLjdbcURL, SQLusername, SQLPassword)
  val batch_size = 100
  var psmt: PreparedStatement = null
  partition.grouped(batch_size).foreach(batch => {
    batch.foreach { row =>
      {
        val inspctbyIndex = row.fieldIndex("INSPECTED_BY")
        val inspctby = row.getString(inspctbyIndex)
        val inspcomIndex = row.fieldIndex("INSPECT_COMMENTS")
        val inspcom = row.getString(inspcomIndex)
        val signIndex = row.fieldIndex("SIGNED_BY")
        val signby = row.getString(signIndex)
        val sqlquery = "MERGE INTO SERVICE_LOG_TABLE as LOG" +
          "USING (VALUES(?, ?, ?))" +
          "AS ROW(inspctby, inspcom, signby)" +
          "ON LOG.SERVICE_NUMBER = ROW.SERVICE_NUMBER and LOG.PART_ID = ROW.PART_ID and LOG.LOTID = ROW.LOTID" +
          "WHEN MATCHED THEN UPDATE SET INSPECTED_BY = 'SMITH', INSPECT_COMMENTS = 'STANDARD_MET', SIGNED_BY = 'WILL'" +
          "WHEN NOT MATCHED THEN INSERT VALUES(ROW.INSPECTED_BY, ROW.INSPECT_COMMENTS, ROW.SIGNED_BY)"
        var psmt: PreparedStatement = Connection.prepareStatement(sqlquery)
        psmt.setString(1, inspctby)
        psmt.setString(2, inspcom)
        psmt.setString(3, signby)
        psmt.addBatch()
      }
    }
    psmt.executeBatch()
    Connection.commit()
    psmt.close()
  })
  Connection.close()
})
Here is the error:
ERROR scheduler.TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4
times, most recent failure: Lost task 0.3 in stage 2.0 (TID 9, lwtxa0gzpappr.corp.bankofamerica.com,
executor 4): java.lang.NullPointerException
at $anonfun$1$$anonfun$apply$1.apply(/location/service_log.scala:101)
at $anonfun$1$$anonfun$apply$1.apply(/location/service_log.scala:74)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at $anonfun$1.apply(/location/service_log.scala:74)
at $anonfun$1.apply(/location/service_log.scala:68)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2121)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2121)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I searched the internet and could not find why this error occurs. Any help would be appreciated.
If you are running this on a Spark cluster, I think you may have to broadcast some object. The executors cannot get the value of the object, hence the null pointer exception.
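There is also a plain scoping bug in the posted code worth checking: `psmt` is declared as `null` at partition scope, but a *new* `var psmt` is declared inside `batch.foreach`, so the outer `psmt.executeBatch()` is invoked on `null`. Separately, the concatenated SQL fragments have no spaces between them (e.g. `...as LOGUSING (VALUES...`), and the `VALUES(?, ?, ?)` row never supplies the three match columns referenced in the `ON` clause. A minimal sketch of a corrected per-partition write, reusing one statement per partition (table and column names taken from the question; the JDBC URL and credentials are placeholder assumptions, and this must run where a SQL Server connection is available):

```scala
import java.sql.{DriverManager, PreparedStatement}

// Assumed connection settings -- substitute the real ones.
val SQLjdbcURL = "jdbc:sqlserver://host:1433;databaseName=mydb"
val SQLusername = "user"
val SQLPassword = "password"

val df = spark.table("location_of_my_table_in_hive")
df.foreachPartition { partition =>
  val connection = DriverManager.getConnection(SQLjdbcURL, SQLusername, SQLPassword)
  connection.setAutoCommit(false)
  // Trailing spaces on each fragment keep the concatenated SQL valid;
  // the VALUES row now carries the three match columns as well.
  val sqlquery =
    "MERGE INTO SERVICE_LOG_TABLE AS LOG " +
    "USING (VALUES (?, ?, ?, ?, ?, ?)) " +
    "AS ROW(SERVICE_NUMBER, PART_ID, LOTID, INSPECTED_BY, INSPECT_COMMENTS, SIGNED_BY) " +
    "ON LOG.SERVICE_NUMBER = ROW.SERVICE_NUMBER " +
    "AND LOG.PART_ID = ROW.PART_ID AND LOG.LOTID = ROW.LOTID " +
    "WHEN MATCHED THEN UPDATE SET INSPECTED_BY = ROW.INSPECTED_BY, " +
    "INSPECT_COMMENTS = ROW.INSPECT_COMMENTS, SIGNED_BY = ROW.SIGNED_BY " +
    "WHEN NOT MATCHED THEN INSERT " +
    "(SERVICE_NUMBER, PART_ID, LOTID, INSPECTED_BY, INSPECT_COMMENTS, SIGNED_BY) " +
    "VALUES (ROW.SERVICE_NUMBER, ROW.PART_ID, ROW.LOTID, " +
    "ROW.INSPECTED_BY, ROW.INSPECT_COMMENTS, ROW.SIGNED_BY);"
  // One PreparedStatement for the whole partition -- no re-declaration
  // inside the loop, so executeBatch() is never called on a null reference.
  val psmt: PreparedStatement = connection.prepareStatement(sqlquery)
  try {
    partition.grouped(100).foreach { batch =>
      batch.foreach { row =>
        psmt.setString(1, row.getString(row.fieldIndex("SERVICE_NUMBER")))
        psmt.setString(2, row.getString(row.fieldIndex("PART_ID")))
        psmt.setString(3, row.getString(row.fieldIndex("LOTID")))
        psmt.setString(4, row.getString(row.fieldIndex("INSPECTED_BY")))
        psmt.setString(5, row.getString(row.fieldIndex("INSPECT_COMMENTS")))
        psmt.setString(6, row.getString(row.fieldIndex("SIGNED_BY")))
        psmt.addBatch()
      }
      psmt.executeBatch()
      connection.commit()
    }
  } finally {
    psmt.close()
    connection.close()
  }
}
```

Note that SQL Server requires a MERGE statement to end with a semicolon, and preparing the statement once per partition (instead of once per row) also avoids leaking one PreparedStatement per row.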