How can I make my python code run on the AWS slave nodes using Apache-Spark?
I have written a python code that sums up all the numbers in the first column of each csv file, as follows:
import os, sys, inspect, csv

### Current directory path.
curr_dir = os.path.split(inspect.getfile(inspect.currentframe()))[0]

### Set up the environment variables
spark_home_dir = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "../spark")))
python_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
os.environ["SPARK_HOME"] = spark_home_dir
os.environ["PYTHONPATH"] = python_dir

### Set up the pyspark directory path
pyspark_dir = python_dir
sys.path.append(pyspark_dir)

### Import pyspark
from pyspark import SparkConf, SparkContext

### Specify the data file directory, and load the data files
data_path = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "./test_dir")))

### myfunc adds up all the numbers in the first column of a csv file.
def myfunc(s):
    total = 0
    if s.endswith(".csv"):
        cr = csv.reader(open(s, "rb"))
        for row in cr:
            total += int(row[0])
    return total

def main():
    ### Initialize the SparkConf and SparkContext
    conf = SparkConf().setAppName("ruofan").setMaster("spark://ec2-52-26-177-197.us-west-2.compute.amazonaws.com:7077")
    sc = SparkContext(conf=conf)
    datafile = sc.wholeTextFiles(data_path)

    ### Send the application to each of the slave nodes
    temp = datafile.map(lambda (path, content): myfunc(str(path).strip('file:')))

    ### Collect the result and print it out.
    for x in temp.collect():
        print x

if __name__ == "__main__":
    main()
I want to use Apache-Spark to parallelize this summation over several csv files with the same python code. I have done the following steps:

1. I uploaded the directory my_dir, which contains the python code together with the csv files, to the cluster master node:

   $ scp -r -i my-key-pair.pem my_dir root@ec2-52-27-82-124.us-west-2.compute.amazonaws.com

2. I copied the python code and the csv files to all of the slave nodes:

   $ ./spark/copy-dir my_dir

3. I set the environment variables on the master node:

   $ export SPARK_HOME=~/spark
   $ export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH

However, when I run the python code on the master node with $ python sum.py, it shows the following error:
Traceback (most recent call last):
File "sum.py", line 18, in <module>
from pyspark import SparkConf, SparkContext
File "/root/spark/python/pyspark/__init__.py", line 41, in <module>
from pyspark.context import SparkContext
File "/root/spark/python/pyspark/context.py", line 31, in <module>
from pyspark.java_gateway import launch_gateway
File "/root/spark/python/pyspark/java_gateway.py", line 31, in <module>
from py4j.java_gateway import java_import, JavaGateway, GatewayClient
ImportError: No module named py4j.java_gateway
I have no idea about this error. Besides, I would like to know whether the master node automatically calls all of the slave nodes to run in parallel. I really appreciate it if anyone can help me.
Here is how I would debug this particular import error.

1. Run the python REPL: $ python
2. Try the failing import: >> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
3. If that fails, try simply: >> import py4j
4. Leave the REPL with >> exit() and install py4j: $ pip install py4j (you need to have pip installed)
5. Open the REPL again with $ python and try the import once more: >> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
6. If it works now, >> exit() and try running $ python sum.py again.
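A related sketch, assuming SPARK_HOME points at your Spark installation: Spark normally bundles its own copy of py4j as a zip under $SPARK_HOME/python/lib, and since sum.py builds sys.path by hand, appending that zip as well should let the import resolve even without pip. The exact file name (for example py4j-0.8.2.1-src.zip) depends on the Spark version, so the glob below is an assumption:

import glob, os, sys

### Assumption: SPARK_HOME points at the Spark installation; fall back to ~/spark.
spark_home = os.environ.get("SPARK_HOME", os.path.expanduser("~/spark"))
sys.path.append(os.path.join(spark_home, "python"))

### Spark ships py4j as a source zip under python/lib; the version in the file
### name differs between releases, so glob for it instead of hard-coding a name.
for zip_path in glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip")):
    sys.path.append(zip_path)

from pyspark import SparkConf, SparkContext  ### py4j should now be importable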
I think you are asking two separate questions. It looks like you have an import error. Is it possible that you have a different version of the py4j package installed on your local machine than the one (if any) installed on the master node?

I can't help with the part about running it in parallel.
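On the parallel part, here is a minimal sketch, assuming the same data_path and SparkContext as in the question: sc.wholeTextFiles already ships each file's text to the workers as (path, content) pairs, so parsing the content string directly keeps the summation on the slave nodes instead of reopening a path inside myfunc.

### Hypothetical helper: sum the first column from the content string that
### wholeTextFiles has already loaded and distributed across the cluster.
def sum_first_column(content):
    total = 0
    for line in content.splitlines():
        fields = line.split(",")
        if fields and fields[0].strip():
            total += int(fields[0])
    return total

datafile = sc.wholeTextFiles(data_path)
sums = datafile.map(lambda pair: sum_first_column(pair[1]))
for x in sums.collect():
    print(x)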