
File not found - API to submit Apache Spark jobs on EC2

Context -

Goal - an API call, from any machine, that submits a Spark job to the Spark EC2 cluster.

Works fine - the Python file running against Apache Spark on localhost. However, it cannot be run on Apache Spark on EC2.

Clarification -

"Submitting the job remotely to the Spark EC2 cluster" means submitting the job to Spark on EC2 from a remote machine - (that works, just not through the API call).

API call -

 curl -X POST http://ec2-54-209-108-127.compute-1.amazonaws.com:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "" ],
  "appResource" : "wordcount.py",
  "clientSparkVersion" : "1.5.0",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "",
  "sparkProperties" : {
    "spark.jars" : "wordcount.py",
    "spark.driver.supervise" : "true",
    "spark.app.name" : "MyJob",
    "spark.eventLog.enabled": "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://ec2-54-209-108-127.compute-1.amazonaws.com:6066"
  }}'
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20160712145703-0003",
  "serverSparkVersion" : "1.6.1",
  "submissionId" : "driver-20160712145703-0003",
  "success" : true
}
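
For comparison, the same CreateSubmissionRequest can also be sent programmatically. Below is a minimal sketch in Python using the third-party requests library; the endpoint, header and payload simply mirror the curl call above, so nothing in it goes beyond the original call:

import requests  # third-party library: pip install requests

SUBMIT_URL = ("http://ec2-54-209-108-127.compute-1.amazonaws.com:6066"
              "/v1/submissions/create")

payload = {
    "action": "CreateSubmissionRequest",
    "appArgs": [""],
    "appResource": "wordcount.py",
    "clientSparkVersion": "1.5.0",
    "environmentVariables": {"SPARK_ENV_LOADED": "1"},
    "mainClass": "",
    "sparkProperties": {
        "spark.jars": "wordcount.py",
        "spark.driver.supervise": "true",
        "spark.app.name": "MyJob",
        "spark.eventLog.enabled": "true",
        "spark.submit.deployMode": "cluster",
        "spark.master": "spark://ec2-54-209-108-127.compute-1.amazonaws.com:6066",
    },
}

# The REST server answers with a CreateSubmissionResponse as shown above.
resp = requests.post(SUBMIT_URL, json=payload,
                     headers={"Content-Type": "application/json;charset=UTF-8"})
print(resp.json()["submissionId"])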

To get the status of the submission, the following API call returns an error - File not found:

curl  http://ec2-54-209-108-127.compute-1.amazonaws.com:6066/v1/submissions/status/driver-20160712145703-0003
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "ERROR",
  "message" : "Exception from the cluster:\njava.io.FileNotFoundException: wordcount.py (No such file or directory)\n\tjava.io.FileInputStream.open(Native Method)\n\tjava.io.FileInputStream.<init>(FileInputStream.java:146)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:124)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:114)\n\torg.spark-project.guava.io.ByteSource.copyTo(ByteSource.java:202)\n\torg.spark-project.guava.io.Files.copy(Files.java:436)\n\torg.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:539)\n\torg.apache.spark.util.Utils$.copyFile(Utils.scala:510)\n\torg.apache.spark.util.Utils$.doFetchFile(Utils.scala:595)\n\torg.apache.spark.util.Utils$.fetchFile(Utils.scala:394)\n\torg.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:150)\n\torg.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:79)",
  "serverSparkVersion" : "1.6.1",
  "submissionId" : "driver-20160712145703-0003",
  "success" : true,
  "workerHostPort" : "172.31.17.189:59433",
  "workerId" : "worker-20160712083825-172.31.17.189-59433"
}
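
Because the create call returns immediately while the driver runs asynchronously, the status endpoint has to be polled to see the final driverState. A minimal sketch (same endpoint as the call above; the split into transient and terminal states follows Spark's standalone DriverState and is my reading, not part of the original post):

import time
import requests

STATUS_URL = ("http://ec2-54-209-108-127.compute-1.amazonaws.com:6066"
              "/v1/submissions/status/driver-20160712145703-0003")

while True:
    status = requests.get(STATUS_URL).json()
    print(status.get("driverState"))
    # SUBMITTED and RUNNING are transient; FINISHED/FAILED/ERROR/KILLED are terminal.
    if status.get("driverState") not in ("SUBMITTED", "RUNNING"):
        if "message" in status:
            print(status["message"])  # e.g. the FileNotFoundException above
        break
    time.sleep(5)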

Looking forward to suggestions and improvements. P.S. - I am new to Apache Spark.

Updated API call (mainClass, appArgs, appResource and clientSparkVersion set to new values) ->

curl -X POST http://ec2-54-209-108-127.compute-1.amazonaws.com:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "/wordcount.py" ],
  "appResource" : "file:/wordcount.py",
  "clientSparkVersion" : "1.6.1",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "org.apache.spark.deploy.SparkSubmit",
  "sparkProperties" : {
    "spark.driver.supervise" : "false",
    "spark.app.name" : "Simple App",
    "spark.eventLog.enabled": "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://ec2-54-209-108-127.compute-1.amazonaws.com:6066"
  }
}'

Apparently, the error means that the Spark master running on the EC2 instance received the submitted job and delegated it to a worker; because the Python application and its related files were not present on the worker that was supposed to run it, the File not found error was raised. So the application has to be uploaded to the EC2 cluster and then copied across all the worker nodes with the following command:

sudo /root/spark-ec2/copy-dir /root/wordcount.py 
RSYNC'ing /root/wordcount.py to slaves...
ec2-54-175-163-32.compute-1.amazonaws.com
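
As an aside, an alternative to rsync-ing the file onto every worker - an assumption on my part, not verified on this cluster - is to point appResource at a URI every node can fetch on its own, since the standalone driver runner downloads appResource before launching the driver. spark-ec2 clusters normally start an ephemeral HDFS, so something along these lines might work:

# Hypothetical alternative: reference the application through a URI that is
# visible cluster-wide instead of a worker-local path. The HDFS host and
# port 9000 are assumptions about the spark-ec2 ephemeral HDFS; adapt them.
APP_URI = "hdfs://ec2-54-209-108-127.compute-1.amazonaws.com:9000/wordcount.py"

submission = {
    "action": "CreateSubmissionRequest",
    "appResource": APP_URI,   # fetched by whichever worker runs the driver
    "appArgs": [APP_URI],
    # ... remaining fields as in the updated API call above ...
}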

With that, the "File not found" error stopped. However, after resubmitting the Spark job, the submission status was:

{
  "action": "SubmissionStatusResponse",
  "driverState": "FAILED",
  "serverSparkVersion": "1.6.1",
  "submissionId": "driver-20160713094138-0010",
  "success": true,
  "workerHostPort": "172.31.17.189:59433",
  "workerId": "worker-20160712083825-172.31.17.189-59433"
}

So I am not sure what the exact error is now - whether the original one has been resolved or not, and what the new one is.
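
A FAILED driverState with no message usually means the driver process was launched but exited abnormally, so the next place to look is the driver's stderr on the worker that ran it. A hedged sketch: the standalone worker web UI normally listens on port 8081 (not the workerHostPort shown in the response), the logPage parameters are an assumption that may vary across Spark versions, and the private 172.31.x address is only reachable from inside the VPC:

import requests

WORKER_UI = "http://172.31.17.189:8081"  # default worker web UI port (assumption)
DRIVER_ID = "driver-20160713094138-0010"

# logPage is the worker UI page that renders driver/executor logs.
log = requests.get(WORKER_UI + "/logPage/",
                   params={"driverId": DRIVER_ID, "logType": "stderr"})
print(log.text)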

