
Unable to run TensorFlow on Spark

I am trying to get TensorFlow working on my Spark cluster so that it runs in parallel. To start with, I tried to use this demo as-is.
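
To give an idea of the pattern involved, here is a minimal sketch of what such a demo does (illustrative names only, not the demo's actual code):

    from pyspark import SparkContext

    def label_partition(image_paths):
        # Import TensorFlow inside the task so each executor loads it locally.
        import tensorflow as tf
        for path in image_paths:
            # ... run the TensorFlow model on the image at `path` ...
            yield (path, "some_label")

    sc = SparkContext(appName="tf_on_spark_sketch")
    paths = sc.parallelize(["img1.jpg", "img2.jpg"])   # hypothetical inputs
    labelled_images = paths.mapPartitions(label_partition)
    local_labelled_images = labelled_images.collect()  # the call that fails below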

The demo works fine without Spark, but when I run it with Spark I get the following error:

16/08/02 10:44:16 INFO DAGScheduler: Job 0 failed: collect at   /home/hdfs/tfspark.py:294, took 1.151383 s
Traceback (most recent call last):
  File "/home/hdfs/tfspark.py", line 294, in <module>
    local_labelled_images = labelled_images.collect()
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
16/08/02 10:44:17 INFO BlockManagerInfo: Removed broadcast_2_piece0 on localhost:45020 in memory (size: 6.4 KB, free: 419.5 MB)
16/08/02 10:44:17 INFO ContextCleaner: Cleaned accumulator 2
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "/usr/lib/python2.7/site-packages/six.py", line 118, in __getattr__
    _module = self._resolve()
  File "/usr/lib/python2.7/site-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/usr/lib/python2.7/site-packages/six.py", line 118, in __getattr__
    _module = self._resolve()
  File "/usr/lib/python2.7/site-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/usr/lib/python2.7/site-packages/six.py", line 118, in __getattr__
    _module = self._resolve()
.
.
.
RuntimeError: maximum recursion depth exceeded

I get the same error whether I use pyspark or spark-submit directly.

I tried increasing the recursion limit to 50000 (even though it is probably not the root cause), but it did not help.
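
(For reference, the limit was raised with the standard call below; shown only to document what was tried.)

    import sys
    sys.setrecursionlimit(50000)  # default is 1000; raising it did not help here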

Since the error is caused by the six package, I figured Python 3 might fix it, but I haven't tried it yet because it could require changes to the production environment (which I would rather avoid if possible).

Is Python 3 supposed to work better with pyspark? (I know it works well with TensorFlow.)

Any ideas on how to make it work with Python 2?

I am running TensorFlow 0.9.0 and Spark 1.6.1 on a HortonWorks cluster, on RHEL 7.2 with Python 2.7.5.

Thanks

Update:

I tried it with Python 3.5 and got the same exception, so upgrading to Python 3 apparently won't help.

I finally realized that the root cause is the six module itself: it has some compatibility issue with Spark and fails whenever it is loaded.

So to work around the problem, I searched the demo for every usage of the six package and replaced it with the plain Python 2 equivalent (e.g., six.moves.urllib.response became urllib2). After removing all occurrences of six, the demo runs perfectly on Spark.
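
As an illustration of the kind of substitution involved (the exact imports depend on the demo's code; the urlopen example below is an assumption, not the demo's actual line):

    # Before: goes through six's lazily-resolved module objects, which
    # Spark's pickle-based serializer trips over on the workers:
    #   from six.moves.urllib import request
    #   data = request.urlopen(url).read()

    # After: plain Python 2 standard library, no six involved:
    import urllib2

    def fetch(url):
        # Read the remote resource and return its contents as a string.
        return urllib2.urlopen(url).read()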
