Unable to submit Spark job from Windows IDE to Linux cluster

I just read about findspark and found it interesting, since so far I have only used spark-submit, which is not well suited for interactive development in an IDE. I tried running this file on Windows 10, Anaconda 4.4.0, Python 3.6.1, IPython 5.3.0, Spyder 3.1.4, Spark 2.1.1:

def inc(i):
    return i + 1

import findspark
findspark.init()

import pyspark
sc = pyspark.SparkContext(master='local',
                          appName='test1')

print(repr(sc.parallelize(tuple(range(10))).map(inc).collect()))

Spyder generates the command runfile('C:/tests/temp1.py', wdir='C:/tests') and it prints out [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. However, if I try to use a Spark cluster running on Ubuntu instead, I get an error:

def inc(i):
    return i + 1

import findspark
findspark.init()

import pyspark
sc = pyspark.SparkContext(master='spark://192.168.1.57:7077',
                          appName='test1')

print(repr(sc.parallelize(tuple(range(10))).map(inc).collect()))

IPython error:

Traceback (most recent call last):

  File "<ipython-input-1-820bd4275b8c>", line 1, in <module>
    runfile('C:/tests/temp.py', wdir='C:/tests')

  File "C:\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile
    execfile(filename, namespace)

  File "C:\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/tests/temp.py", line 11, in <module>
    print(repr(sc.parallelize(tuple(range(10))).map(inc).collect()))

  File "C:\projects\spark-2.1.1-bin-hadoop2.7\python\pyspark\rdd.py", line 808, in collect
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())

  File "C:\projects\spark-2.1.1-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)

  File "C:\projects\spark-2.1.1-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value
    format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling

Worker stderr:

ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.IOException: Cannot run program "C:\Anaconda3\pythonw.exe": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
    at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:89)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:116)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

For some reason this tries to use the Windows binary path on the Linux slaves. Any idea how to overcome this? I get the same result from the Python console in Spyder, except that the error is Cannot run program "C:\\Anaconda3\\python.exe": error=2, No such file or directory. In fact, it also fails the same way when run from the command line with python temp.py.

This version works fine, even when submitted from Windows to Linux:

def inc(i):
    return i + 1

import pyspark
sc = pyspark.SparkContext(appName='test2')

print(repr(sc.parallelize(tuple(range(10))).map(inc).collect()))

spark-submit --master spark://192.168.1.57:7077 temp2.py

I found the solution, and it turned out to be very simple. pyspark/context.py uses the environment variable PYSPARK_PYTHON to determine the path of the Python executable, and it defaults to plain "python", which is the "right" value here. However, findspark by default overrides this environment variable to match sys.executable, which obviously cannot work across platforms.

Anyway, here is the working code, for future reference:

def inc(i):
    return i + 1

import findspark
findspark.init(python_path='python') # <-- so simple!

import pyspark
sc = pyspark.SparkContext(master='spark://192.168.1.57:7077',
                          appName='test1')

print(repr(sc.parallelize(tuple(range(10))).map(inc).collect()))
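Since the root cause is just the PYSPARK_PYTHON environment variable, the same fix should presumably also work without the python_path argument, by overriding the variable after findspark.init() and before the SparkContext is created. A minimal sketch, assuming pyspark picks up PYSPARK_PYTHON at context creation time (untested in this exact setup):

def inc(i):
    return i + 1

import os
import findspark
findspark.init()

# findspark has set PYSPARK_PYTHON to sys.executable (a Windows path);
# replace it with a plain interpreter name the Linux workers can resolve.
os.environ['PYSPARK_PYTHON'] = 'python'

import pyspark
sc = pyspark.SparkContext(master='spark://192.168.1.57:7077',
                          appName='test1')

print(repr(sc.parallelize(tuple(range(10))).map(inc).collect()))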
