Error when trying to load data from Data Fusion to Salesforce
I'm getting this error when trying to load data from Data Fusion to Salesforce:
java.lang.RuntimeException: There was issue communicating with Salesforce
at io.cdap.plugin.salesforce.plugin.sink.batch.SalesforceOutputFormat.getRecordWriter(SalesforceOutputFormat.java:53) ~[1599122485492-0/:na]
at org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.initWriter(SparkHadoopWriter.scala:350) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:120) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.scheduler.Task.run(Task.scala:109) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) ~[spark-core_2.11-2.3.4.jar:2.3.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_252]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_252]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_252]
Caused by: com.sforce.async.AsyncApiException: InvalidJob : Invalid job id: null
at com.sforce.async.BulkConnection.parseAndThrowException(BulkConnection.java:182) ~[na:na]
at com.sforce.async.BulkConnection.doHttpGet(BulkConnection.java:753) ~[na:na]
at com.sforce.async.BulkConnection.getJobStatus(BulkConnection.java:769) ~[na:na]
at com.sforce.async.BulkConnection.getJobStatus(BulkConnection.java:760) ~[na:na]
at io.cdap.plugin.salesforce.plugin.sink.batch.SalesforceRecordWriter.<init>(SalesforceRecordWriter.java:69) ~[1599122485492-0/:na]
at io.cdap.plugin.salesforce.plugin.sink.batch.SalesforceOutputFormat.getRecordWriter(SalesforceOutputFormat.java:51) ~[1599122485492-0/:na]
... 10 common frames omitted
2020-09-03 08:41:28,595 - WARN [task-result-getter-0:o.a.s.ThrowableSerializationWrapper@192] - Task exception could not be deserialized
java.lang.ClassNotFoundException: Class not found in all delegated ClassLoaders: com.sforce.async.AsyncApiException
at io.cdap.cdap.common.lang.CombineClassLoader.findClass(CombineClassLoader.java:96) ~[na:na]
at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_252]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_252]
at io.cdap.cdap.common.lang.WeakReferenceDelegatorClassLoader.findClass(WeakReferenceDelegatorClassLoader.java:58) ~[na:na]
What does this error mean? I have already verified that the input fields match the SObject definition.
I looked at the stack trace, and I think I might know what the problem is. The property mapred.salesforce.job.id is undefined; when the code executes, it tries to read the value of this key, and because no value has been set, the job errors out. I think you need to set the mapred.salesforce.job.id flag as a runtime property.
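The failure mode described above can be sketched roughly as follows. This is a simplified illustration in Python, not the plugin's actual Java code; the stand-in names (AsyncApiException, get_job_status, the conf dict) only mimic the classes that appear in the stack trace:

```python
# Simplified illustration of why a missing mapred.salesforce.job.id
# property produces "InvalidJob : Invalid job id: null": the record writer
# reads the job id from the Hadoop configuration, gets nothing back, and
# hands that straight to the Bulk API, which rejects it.

class AsyncApiException(Exception):
    """Stand-in for com.sforce.async.AsyncApiException."""

def get_job_status(job_id):
    # BulkConnection.getJobStatus fails the same way for a null id.
    if job_id is None:
        raise AsyncApiException("InvalidJob : Invalid job id: null")
    return "InProgress"

conf = {}  # stand-in for the Hadoop Configuration; the property was never set
job_id = conf.get("mapred.salesforce.job.id")  # returns None

try:
    get_job_status(job_id)
except AsyncApiException as e:
    print(f"Pipeline fails: {e}")

# Once the property is defined, the same lookup succeeds:
conf["mapred.salesforce.job.id"] = "12345"
print(get_job_status(conf["mapred.salesforce.job.id"]))
```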
To do that in Data Fusion, do the following:
Set the desired cluster properties, prefixing every property name with system.profile.properties. (note the trailing dot). For our case, I think the name would be system.profile.properties.mapred:mapred.salesforce.job.id and the value is a number you'd want to use as your ID.
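As a rough sketch of what that runtime argument looks like, assuming an arbitrary numeric ID of 12345 (any value of your choosing; the REST path in the comment is also only a sketch of the CDAP workflow-start call, with a hypothetical pipeline name):

```python
import json

# Runtime arguments passed when starting the pipeline. Keys carrying the
# system.profile.properties. prefix are forwarded to the compute profile,
# which is how the cluster-level mapred.salesforce.job.id gets set.
runtime_args = {
    "system.profile.properties.mapred:mapred.salesforce.job.id": "12345",
}

# In the Data Fusion UI these go under the pipeline's Runtime Arguments; via
# the CDAP REST API they would form the JSON body of the start call, e.g.:
#   POST /v3/namespaces/default/apps/<your-pipeline>/workflows/DataPipelineWorkflow/start
print(json.dumps(runtime_args, indent=2))
```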