
ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 1) java.net.NoRouteToHostException: No route to host

I am trying to run a word-count Spark application and I hit this error every time. Please help. Below is the wordcount.scala file; after `sbt package` I ran the spark-submit command.

package main

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("Word Count")
    val sc = new SparkContext(conf)

    // Read the input file, split each line into words, and count occurrences
    val textfile = sc.textFile("file:///usr/local/spark/README.md")
    val tokenizeddata = textfile.flatMap(line => line.split(" "))
    val countprep = tokenizeddata.map(word => (word, 1))
    val counts = countprep.reduceByKey((accumvalue, newvalue) => accumvalue + newvalue)
    // Sort by count, descending, and write the result out
    val sortedcount = counts.sortBy(kvpair => kvpair._2, ascending = false)
    sortedcount.saveAsTextFile("file:///usr/local/wordcount")
  }
}

Then I ran the following command:

 bin/spark-submit --class "main.WordCount" --master "local[*]" "/home/hadoop/SparkApps/target/scala-2.10/word-count_2.10-1.0.jar"

The output was:

 Spark assembly has been built with Hive, including Datanucleus jars on classpath
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
 15/11/28 07:38:51 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 1)
 java.net.NoRouteToHostException: No route to host
     at java.net.PlainSocketImpl.socketConnect(Native Method)
     at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
     at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
     at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
     at java.net.Socket.connect(Socket.java:589)
     at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
     at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
     at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
     at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
     at sun.net.www.http.HttpClient.New(HttpClient.java:308)
     at sun.net.www.http.HttpClient.New(HttpClient.java:326)
     at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
     at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
     at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
     at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
     at org.apache.spark.util.Utils$.fetchFile(Utils.scala:375)
     at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:325)
     at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:323)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
     at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
     at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
     at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
     at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:323)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:158)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)

Maybe you should add .setMaster("local")
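A minimal sketch of that suggestion, applied to the SparkConf in the code above (the comment about why it may help is my reading of the stack trace, not confirmed by the answerer): with a "local" master everything runs in the driver JVM, so the executor never has to fetch the application jar over HTTP (Utils.fetchFile is where the NoRouteToHostException was thrown).

    val conf = new SparkConf()
      .setAppName("Word Count")
      .setMaster("local")   // run in-process; no network fetch of the jar
    val sc = new SparkContext(conf)

Note that a master set in code takes precedence over the --master flag passed to spark-submit, so with this change the "local[*]" argument on the command line is effectively overridden.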

