
Issue with Apache Spark working on Hadoop YARN

I'm very new to the world of Big Data, and especially to Apache Spark / Hadoop YARN.

To experiment a bit, I installed single-node Hadoop in a virtual machine, and I also added Spark.

I think the environment is set up correctly, because I can access:

Then I created a Python file that counts words:

from pyspark import SparkConf, SparkContext
from operator import add
import sys

## Constants
APP_NAME = "HelloWorld of Big Data"
## OTHER FUNCTIONS/CLASSES

def main(sc, filename):
    # Read the input file into an RDD of lines
    textRDD = sc.textFile(filename)
    # Split each line into words and pair each word with a count of 1
    words = textRDD.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1))
    # Sum the counts per word and collect the result on the driver
    wordcount = words.reduceByKey(add).collect()
    for wc in wordcount:
        print("{} {}".format(wc[0], wc[1]))  # works under both Python 2 and 3

if __name__ == "__main__":
    # Configure Spark
    conf = SparkConf().setAppName(APP_NAME).setMaster("local[*]")
    sc = SparkContext(conf=conf)
    filename = sys.argv[1]
    # Execute main functionality
    main(sc, filename)

I also have a text file named data.txt. I uploaded it to HDFS with the following command:

hadoop fs -put data.txt hdfs://localhost:9000

My file is located at: hdfs://localhost:9000/user/hduser
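(As a quick sanity check, the directory can be listed to confirm where the file actually landed; the output path will vary with your setup:)

hadoop fs -ls hdfs://localhost:9000/user/hduser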

So now I want to run my Python script with Spark / Hadoop.

I ran: ./bin/spark-submit /home/hduser/count.py /home/hduser/data.txt

But I get:

Traceback (most recent call last):
  File "/home/hduser/count.py", line 25, in <module>
    main(sc, filename)
  File "/home/hduser/count.py", line 13, in main
    wordcount = words.reduceByKey(add).collect()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1623, in reduceByKey
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1849, in combineByKey
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2259, in _defaultReducePartitions
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2455, in getNumPartitions
  File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o21.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/data.txt
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

This is strange, because my data.txt file did make it into HDFS, yet I still get: Input path does not exist: hdfs://localhost:9000/home/hduser/data.txt

Any ideas?

Your path is invalid: there is no /home directory in HDFS. Because your default filesystem is hdfs://localhost:9000, the schemeless path /home/hduser/data.txt is resolved against HDFS rather than the local filesystem, which is why Spark looked for hdfs://localhost:9000/home/hduser/data.txt. You uploaded the file to /user/hduser, so try the following:

./bin/spark-submit /home/hduser/count.py /user/hduser/data.txt
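Equivalently, you can pass the fully qualified HDFS URI (the same location shown above), which removes any ambiguity about which filesystem the path refers to:

./bin/spark-submit /home/hduser/count.py hdfs://localhost:9000/user/hduser/data.txt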

Make sure the HADOOP_HOME and SPARK_HOME path variables are set in spark-env.sh, so that you can do I/O against HDFS and submit jobs to the YARN cluster.
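A minimal sketch of what that could look like. The Spark path matches the /usr/local/spark path visible in your traceback; the Hadoop path is an assumption to adjust. HADOOP_CONF_DIR is also worth setting, since that is the variable Spark reads to locate the Hadoop client configuration (core-site.xml, and thus fs.defaultFS, plus yarn-site.xml):

# $SPARK_HOME/conf/spark-env.sh -- example values, adjust to your installation
export HADOOP_HOME=/usr/local/hadoop              # assumed Hadoop install path
export SPARK_HOME=/usr/local/spark                # matches the path in the traceback above
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop    # where core-site.xml / yarn-site.xml live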
